International Journal of Advanced Artificial Intelligence Research

ADAPTIVE SIMILARITY-DRIVEN APPROACHES FOR CONTINUAL LEARNING: BRIDGING TASK-AWARE AND TASK-FREE PARADIGMS

Authors

  • Farhad Nouri, Faculty of Computer Engineering, University of Tehran, Iran
  • Dr. Mohammadreza Nouri, Department of AI and Robotics, University of Tabriz, Tabriz, Iran

DOI:

https://doi.org/10.55640/ijaair-v02i01-01

Keywords:

Continual learning, adaptive learning, similarity-driven approach, task-aware learning

Abstract

Continual learning aims to enable models to learn sequential tasks without forgetting previously acquired knowledge. This paper presents an adaptive similarity-driven framework that bridges the gap between task-aware and task-free paradigms in continual learning. By leveraging similarity metrics to dynamically adjust learning strategies based on incoming data distributions, the proposed approach allows models to maintain performance across tasks without relying on explicit task boundaries. Experimental evaluations on benchmark datasets demonstrate that the adaptive similarity-driven method outperforms traditional task-aware and task-free models in mitigating catastrophic forgetting while preserving scalability. The findings offer a promising direction for developing flexible and efficient continual learning systems adaptable to real-world scenarios.
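To make the mechanism sketched in the abstract concrete, the short Python example below illustrates one plausible form of a similarity-driven controller: each incoming batch is matched against stored context prototypes by cosine similarity, a low best-match similarity is treated as an implicit task boundary (task-free behaviour), and a high similarity reuses and refines the matched context (task-aware behaviour). The class name, the cosine metric, the threshold value, and the moving-average update are all illustrative assumptions for exposition, not the authors' published implementation.

import numpy as np

class SimilarityDrivenRouter:
    """Hypothetical sketch of the similarity-driven routing idea:
    batches are assigned to inferred contexts by prototype similarity,
    so explicit task labels or boundaries are never required."""

    def __init__(self, boundary_threshold=0.7, ema=0.9):
        self.prototypes = []                           # one mean embedding per inferred context
        self.boundary_threshold = boundary_threshold   # below this, declare a new context
        self.ema = ema                                 # smoothing for prototype updates

    @staticmethod
    def _cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def route(self, batch_embeddings):
        """Return (context_id, similarity) for a batch of embedding vectors,
        creating a new context when no stored prototype is similar enough."""
        mean_emb = batch_embeddings.mean(axis=0)
        if not self.prototypes:
            self.prototypes.append(mean_emb)
            return 0, 1.0
        sims = [self._cosine(mean_emb, p) for p in self.prototypes]
        best = int(np.argmax(sims))
        if sims[best] < self.boundary_threshold:
            # Low similarity: treat the distribution shift as an implicit task boundary.
            self.prototypes.append(mean_emb)
            return len(self.prototypes) - 1, sims[best]
        # High similarity: refine the matched prototype with an exponential moving average.
        self.prototypes[best] = self.ema * self.prototypes[best] + (1.0 - self.ema) * mean_emb
        return best, sims[best]

# Example: two batches from clearly different distributions land in different contexts.
rng = np.random.default_rng(0)
router = SimilarityDrivenRouter()
ctx_a, _ = router.route(rng.normal(0.0, 1.0, size=(32, 8)))
ctx_b, sim = router.route(rng.normal(5.0, 1.0, size=(32, 8)))  # low similarity, likely a new context

The returned similarity could then modulate how strongly the learner is regularized, for example a penalty weight proportional to (1 - similarity), so that dissimilar data relaxes constraints while familiar data consolidates knowledge; this is one way such an adaptive adjustment of the learning strategy could be realized.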

Published

2025-01-15

How to Cite

ADAPTIVE SIMILARITY-DRIVEN APPROACHES FOR CONTINUAL LEARNING: BRIDGING TASK-AWARE AND TASK-FREE PARADIGMS. (2025). International Journal of Advanced Artificial Intelligence Research, 2(01), 1-6. https://doi.org/10.55640/ijaair-v02i01-01
