Deep Contextual Understanding: A Parameter-Efficient Large Language Model Approach To Fine-Grained Affective Computing
Abstract
Background: Traditional methods in Affective Computing often fail to capture the subtle, context-dependent shifts necessary for fine-grained emotion classification due to limited semantic understanding and high reliance on hand-crafted features. While Large Language Models (LLMs) offer superior contextual depth, their immense computational cost hinders domain-specific fine-tuning and practical deployment.
Methods: This study leverages a pre-trained Transformer-based LLM (comparable to RoBERTa-Large) and applies a Parameter-Efficient Fine-Tuning (PEFT) methodology, specifically Low-Rank Adaptation (LoRA), to a complex, multi-label dataset of 11 discrete emotional states. We systematically compare the performance of LoRA against a traditional Bi-LSTM baseline and a Full Fine-Tuning (FFT) LLM, while also conducting a detailed ablation study on LoRA's rank (r) and scaling factor (α) to determine the optimal balance between performance and efficiency.
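The parameter savings behind the LoRA methodology described above can be sketched numerically. The matrix dimension, rank, and scaling factor below are illustrative assumptions for demonstration, not the study's reported values:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64          # hidden size of one frozen weight matrix (illustrative)
r = 8           # LoRA rank (a value of the kind ablated in the study)
alpha = 16      # LoRA scaling factor

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

# Effective weight after adaptation: W + (alpha / r) * B @ A.
# Only A and B receive gradients; W stays frozen, so at initialization
# (B = 0) the adapted model behaves exactly like the pretrained one.
W_eff = W + (alpha / r) * (B @ A)

trainable = A.size + B.size   # 2 * d * r parameters
full = W.size                 # d * d parameters
print(trainable / full)       # fraction of this matrix trained: 0.25
```

With realistic hidden sizes (d in the thousands) and small ranks, this ratio drops well below 1%, which is the source of LoRA's reduction in trainable parameters and training time.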
Results: The PEFT-LoRA model achieved a decisive performance increase, outperforming the Bi-LSTM baseline and, critically, marginally exceeding the performance of the Full Fine-Tuning model. The LoRA approach reduced the number of trainable parameters to a small fraction of the full model and substantially decreased training time. Our hyperparameter analysis identified an optimal rank and scaling-factor configuration, demonstrating that maximum performance does not require maximum parameter allocation.
Conclusion: LLMs are demonstrably superior for nuanced affective analysis. The PEFT-LoRA approach successfully overcomes the computational barrier, making state-of-the-art affective computing accessible and scalable. This efficiency enables the rapid development of specialized, low-latency AI agents, although future work must address the critical challenge of expanding to multimodal data and mitigating inherent model biases.
Similar Articles
- Sara Rossi, Samuel Johnson, NEUROSYMBOLIC AI: MERGING DEEP LEARNING AND LOGICAL REASONING FOR ENHANCED EXPLAINABILITY, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 06 (2025): Volume 02 Issue 06
- Dr. Jae-Won Kim, Dr. Sung-Ho Lee, NAVIGATING ALGORITHMIC EQUITY: UNCOVERING DIVERSITY AND INCLUSION INCIDENTS IN ARTIFICIAL INTELLIGENCE, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 07 (2025): Volume 02 Issue 07
- Dwi Jatmiko, Huu Nguyen, AI-Guided Policy Learning For Hyperdimensional Sampling: Exploiting Expert Human Demonstrations From Interactive Virtual Reality Molecular Dynamics, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 10 (2025): Volume 02 Issue 10
- Dr. Elias T. Vance, Prof. Camille A. Lefevre, ENHANCING TRUST AND CLINICAL ADOPTION: A SYSTEMATIC LITERATURE REVIEW OF EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) APPLICATIONS IN HEALTHCARE, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 10 (2025): Volume 02 Issue 10
- Angelo Soriano, Sheila Ann Mercado, The Convergence of AI And UVM: Advanced Methodologies for the Verification of Complex Low-Power Semiconductor Architectures, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 11 (2025): Volume 02 Issue 11
- Dr. Elena M. Ruiz, Integrating Big Data Architectures and AI-Powered Analytics into Mergers & Acquisitions Due Diligence: A Theoretical Framework for Value Measurement, Risk Detection, and Strategic Decision-Making, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 09 (2025): Volume 02 Issue 09