International Journal of Advanced Artificial Intelligence Research

Deep Contextual Understanding: A Parameter-Efficient Large Language Model Approach To Fine-Grained Affective Computing

Authors

  • Dr. Elara V. Sorenson, Department of Computational Linguistics, Institute for Cognitive Science, Berlin, Germany

Keywords:

Affective Computing, Large Language Models, Fine-Grained Emotion, Parameter-Efficient Fine-Tuning

Abstract

Background: Traditional methods in Affective Computing often fail to capture the subtle, context-dependent shifts necessary for fine-grained emotion classification due to limited semantic understanding and high reliance on hand-crafted features. While Large Language Models (LLMs) offer superior contextual depth, their immense computational cost hinders domain-specific fine-tuning and practical deployment.

Methods: This study leverages a pre-trained Transformer-based LLM (comparable to RoBERTa-Large) and applies a Parameter-Efficient Fine-Tuning (PEFT) methodology, specifically Low-Rank Adaptation (LoRA), to a complex, multi-label dataset of 11 discrete emotional states. We systematically compare the performance of LoRA against a traditional Bi-LSTM baseline and a Full Fine-Tuning (FFT) LLM, while also conducting a detailed ablation study on LoRA's rank (r) and scaling factor (α) to determine the optimal balance between performance and efficiency.
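
For illustration, the fine-tuning setup described above can be sketched with the Hugging Face transformers and peft libraries. The checkpoint name, injection points, and hyperparameter values below are assumptions; the abstract specifies only a RoBERTa-Large-comparable backbone, 11 emotion labels, and the use of LoRA.

```python
# Minimal sketch of LoRA-based PEFT for multi-label emotion classification.
# Checkpoint, target modules, and hyperparameter values are illustrative assumptions.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

MODEL_NAME = "roberta-large"   # assumed backbone ("comparable to RoBERTa-Large")
NUM_EMOTIONS = 11              # discrete emotional states in the dataset

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=NUM_EMOTIONS,
    problem_type="multi_label_classification",  # sigmoid + BCE loss, one logit per emotion
)

# LoRA freezes the pre-trained weights and trains only low-rank update
# matrices injected into the attention projections (plus the classifier head).
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                 # rank of the low-rank decomposition (ablated)
    lora_alpha=16,                       # scaling factor (ablated)
    lora_dropout=0.1,
    target_modules=["query", "value"],   # assumed injection points
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # only LoRA matrices + head are trainable

# Example forward pass on a single utterance (illustrative).
inputs = tokenizer("I can't believe you remembered - thank you.", return_tensors="pt")
logits = model(**inputs).logits          # shape: (1, 11)
```

With this configuration only the injected low-rank matrices and the classification head receive gradients, which is the source of the parameter and training-time savings reported in the Results.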

Results: The LLM (PEFT-LoRA) model achieved a decisive performance increase, outperforming the Bi-LSTM baseline and, critically, marginally exceeding the performance of the Full Fine-Tuning (FFT) model. The LoRA approach reduced the number of trainable parameters and decreased training time relative to FFT. Our hyperparameter analysis identified an optimal configuration of rank r and scaling factor α, demonstrating that maximum performance does not require maximum parameter allocation.
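
The rank and scaling-factor ablation summarized above could be organized as a simple grid over LoRA configurations, as in the sketch below. The grid values, backbone, and the omitted training/evaluation loop are assumptions; only the parameter-efficiency side of the trade-off is computed here.

```python
# Illustrative ablation grid over LoRA rank r and scaling factor alpha.
# Grid values and backbone are assumptions; the study's training and
# evaluation loop is elided.
from itertools import product

from peft import LoraConfig, get_peft_model, TaskType
from transformers import AutoModelForSequenceClassification

def count_trainable(model):
    """Number of parameters that receive gradients under the LoRA wrapper."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

efficiency = {}
for r, alpha in product([4, 8, 16, 32], [8, 16, 32]):
    base = AutoModelForSequenceClassification.from_pretrained(
        "roberta-large",                       # assumed backbone
        num_labels=11,
        problem_type="multi_label_classification",
    )
    cfg = LoraConfig(task_type=TaskType.SEQ_CLS, r=r, lora_alpha=alpha,
                     target_modules=["query", "value"])
    peft_model = get_peft_model(base, cfg)
    # Fine-tuning and scoring on the emotion test set would happen here;
    # the reported (r, alpha) optimum is chosen on that score, not on size.
    efficiency[(r, alpha)] = count_trainable(peft_model)
```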

Conclusion: LLMs are demonstrably superior for nuanced affective analysis. The PEFT-LoRA approach successfully overcomes the computational barrier, making state-of-the-art affective computing accessible and scalable. This efficiency enables the rapid development of specialized, low-latency AI agents, although future work must address the critical challenge of expanding to multimodal data and mitigating inherent model biases.

Published

2025-10-30

How to Cite

Deep Contextual Understanding: A Parameter-Efficient Large Language Model Approach To Fine-Grained Affective Computing. (2025). International Journal of Advanced Artificial Intelligence Research, 2(10), 38-51. https://aimjournals.com/index.php/ijaair/article/view/314
