Deep Contextual Understanding: A Parameter-Efficient Large Language Model Approach To Fine-Grained Affective Computing
Abstract
Background: Traditional methods in Affective Computing often fail to capture the subtle, context-dependent shifts necessary for fine-grained emotion classification due to limited semantic understanding and high reliance on hand-crafted features. While Large Language Models (LLMs) offer superior contextual depth, their immense computational cost hinders domain-specific fine-tuning and practical deployment.
Methods: This study leverages a pre-trained Transformer-based LLM (comparable to RoBERTa-Large) and applies a Parameter-Efficient Fine-Tuning (PEFT) methodology, specifically Low-Rank Adaptation (LoRA), to a complex, multi-label dataset of 11 discrete emotional states. We systematically compare the performance of LoRA against a traditional Bi-LSTM baseline and a Full Fine-Tuning (FFT) LLM, while also conducting a detailed ablation study on LoRA's rank (r) and scaling factor (α) to determine the optimal balance between performance and efficiency.
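The paper's training code is not reproduced here; the snippet below is a minimal sketch of how such a PEFT-LoRA setup could be configured with the Hugging Face transformers and peft libraries. The roberta-large checkpoint, the attention modules targeted, and the specific values r=8 and lora_alpha=16 are illustrative assumptions standing in for the ablated settings, not figures taken from the paper.

```python
# Minimal sketch of a PEFT-LoRA setup for multi-label emotion classification.
# Assumptions (not from the paper): roberta-large backbone, 11 emotion labels,
# r=8 and lora_alpha=16 as stand-ins for the ablated hyperparameters.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

NUM_EMOTIONS = 11  # discrete emotional states in the multi-label dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
base_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large",
    num_labels=NUM_EMOTIONS,
    problem_type="multi_label_classification",  # sigmoid outputs, BCE loss per label
)

# Low-Rank Adaptation: inject small trainable rank-r update matrices into the
# attention projections while the original pre-trained weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # rank of the low-rank update (ablation variable)
    lora_alpha=16,                      # scaling factor (ablation variable)
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections to adapt
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts
```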
Results: The LLM (PEFT-LoRA) model achieved a decisive performance increase, outperforming the Bi-LSTM baseline and, critically, marginally exceeding the performance of the FFT model. The LoRA approach sharply reduced the number of trainable parameters and decreased training time relative to full fine-tuning. Our hyperparameter analysis identified an optimal rank and scaling-factor configuration, demonstrating that maximum performance does not require maximum parameter allocation.
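The exact reduction figures were not recoverable from this abstract. The generic helper below illustrates one way the reported trainable-parameter reduction could be measured for any PyTorch model wrapped with LoRA; the function name and output format are illustrative, not taken from the paper.

```python
# Illustrative helper (not from the paper): quantify how many parameters a
# PEFT-wrapped model actually trains, and the percentage reduction relative
# to updating every weight under full fine-tuning.
import torch


def trainable_parameter_report(model: torch.nn.Module) -> dict:
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return {
        "trainable": trainable,
        "total": total,
        "reduction_pct": 100.0 * (1.0 - trainable / total),
    }


# Example usage with the LoRA-wrapped model from the previous sketch:
# print(trainable_parameter_report(model))
```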
Conclusion: LLMs are demonstrably superior for nuanced affective analysis. The PEFT-LoRA approach successfully overcomes the computational barrier, making state-of-the-art affective computing accessible and scalable. This efficiency enables the rapid development of specialized, low-latency AI agents, although future work must address the critical challenge of expanding to multimodal data and mitigating inherent model biases.
Similar Articles
- John M. Davenport, AI-AUGMENTED FRAMEWORKS FOR DATA QUALITY VALIDATION: INTEGRATING RULE-BASED ENGINES, SEMANTIC DEDUPLICATION, AND GOVERNANCE TOOLS FOR ROBUST LARGE-SCALE DATA PIPELINES, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 08 (2025): Volume 02 Issue 08
- Dr. Alessia Romano, Prof. Marco Bianchi, DEVELOPING AI ASSISTANCE FOR INCLUSIVE COMMUNICATION IN ITALIAN FORMAL WRITING, International Journal of Advanced Artificial Intelligence Research: Vol. 1 No. 01 (2024): Volume 01 Issue 01
- Dr. Elias A. Petrova, AN EDGE-INTELLIGENT STRATEGY FOR ULTRA-LOW-LATENCY MONITORING: LEVERAGING MOBILENET COMPRESSION AND OPTIMIZED EDGE COMPUTING ARCHITECTURES, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 10 (2025): Volume 02 Issue 10
- Bagus Candra, Minh Thu Nguyen, A Comprehensive Evaluation Of Shekar: An Open-Source Python Framework For State-Of-The-Art Persian Natural Language Processing And Computational Linguistics, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 10 (2025): Volume 02 Issue 10
- Mason Johnson, Forging Rich Multimodal Representations: A Survey of Contrastive Self-Supervised Learning, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 11 (2025): Volume 02 Issue 11
- Dr. Mei-Ling Zhou, Dr. Haojie Xu, LEARNING RICH FEATURES WITHOUT LABELS: CONTRASTIVE APPROACHES IN MULTIMODAL ARTIFICIAL INTELLIGENCE SYSTEMS, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 04 (2025): Volume 02 Issue 04
- Dr. Alejandro Moreno, An Explainable, Context-Aware Zero-Trust Identity Architecture for Continuous Authentication in Hybrid Device Ecosystems, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 11 (2025): Volume 02 Issue 11
- Michael Andersson, Optimizing Continuous Schema Evolution and Zero-Downtime Microservices in Enterprise Data Architectures, International Journal of Advanced Artificial Intelligence Research: Vol. 3 No. 01 (2026): Volume 03 Issue 01
- Prof. Michael T. Edwards, ENHANCING AI-CYBERSECURITY EDUCATION: DEVELOPMENT OF AN AI-BASED CYBERHARASSMENT DETECTION LABORATORY EXERCISE, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 02 (2025): Volume 02 Issue 02
- Elena Volkova, Emily Smith, INVESTIGATING DATA GENERATION STRATEGIES FOR LEARNING HEURISTIC FUNCTIONS IN CLASSICAL PLANNING, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 04 (2025): Volume 02 Issue 04