ALIGNING EXPLAINABLE AI WITH USER NEEDS: A PROPOSAL FOR A PREFERENCE-AWARE EXPLANATION FUNCTION
Abstract
The rapid advancement and widespread deployment of Artificial Intelligence (AI) models, particularly deep neural networks, have led to remarkable successes across diverse domains. However, the "black-box" nature of many high-performing models undermines transparency, trust, and accountability. Explainable Artificial Intelligence (XAI) aims to bridge this gap by making AI decisions understandable to humans. While numerous XAI methods have emerged, they frequently overlook the diverse and context-dependent nature of user preferences for explanations: a generic explanation rarely suits every user or every decision-making scenario. This article proposes a conceptual framework centered on a mapping function that adapts explanation generation to specific user profiles, contextual factors, and AI model characteristics. We review the XAI landscape, analyze the varying needs of stakeholders, and detail the proposed mapping function's inputs, logic, and outputs. This user-centric approach promises to enhance the utility, trustworthiness, and effectiveness of XAI systems, fostering broader adoption and responsible AI deployment. We conclude by outlining the key challenges and future research directions necessary to realize this vision.
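To make the proposal concrete, the minimal Python sketch below illustrates one possible shape of such a preference-aware mapping function. All names (UserProfile, Context, ModelInfo, ExplanationSpec, map_to_explanation), fields, and the rule-based logic are illustrative assumptions rather than definitions from the article; a realized system might learn this mapping from user feedback instead of hard-coding it.

```python
from dataclasses import dataclass

# Hypothetical inputs to the mapping function; field names are
# illustrative assumptions, not specifications from the article.
@dataclass
class UserProfile:
    role: str             # e.g. "data_scientist", "domain_expert", "end_user"
    ml_expertise: str     # "low" | "medium" | "high"

@dataclass
class Context:
    stakes: str           # "low" | "high" (e.g. regulated decisions)
    time_budget_s: float  # time the user can spend reading the explanation

@dataclass
class ModelInfo:
    family: str           # e.g. "tree_ensemble", "deep_net"
    is_interpretable: bool

@dataclass
class ExplanationSpec:
    method: str           # e.g. "counterfactual", "feature_attribution"
    modality: str         # "text" | "visual"
    detail: str           # "summary" | "full"

def map_to_explanation(user: UserProfile, ctx: Context,
                       model: ModelInfo) -> ExplanationSpec:
    """Map (user profile, context, model characteristics) to an
    explanation specification. A rule-based stand-in for the proposed
    mapping function."""
    # Non-experts and time-pressed users get concise, example-based output.
    if user.ml_expertise == "low" or ctx.time_budget_s < 60:
        return ExplanationSpec("counterfactual", "text", "summary")
    # High-stakes settings favor detailed, auditable attributions.
    if ctx.stakes == "high":
        return ExplanationSpec("feature_attribution", "visual", "full")
    # Intrinsically interpretable models can simply expose their structure.
    if model.is_interpretable:
        return ExplanationSpec("model_internals", "visual", "full")
    return ExplanationSpec("feature_attribution", "text", "summary")

# Example: a clinician reviewing a high-stakes prediction from a deep net.
spec = map_to_explanation(
    UserProfile(role="domain_expert", ml_expertise="medium"),
    Context(stakes="high", time_budget_s=300.0),
    ModelInfo(family="deep_net", is_interpretable=False),
)
print(spec)  # ExplanationSpec(method='feature_attribution', modality='visual', detail='full')
```

The key design point the sketch captures is that the explanation method, modality, and level of detail are outputs chosen per request, not fixed properties of the XAI system.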
Similar Articles
- Dr. Arvind Patel, Anamika Mishra, INTELLIGENT BARGAINING AGENTS IN DIGITAL MARKETPLACES: A FUSION OF REINFORCEMENT LEARNING AND GAME-THEORETIC PRINCIPLES, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 03 (2025): Volume 02 Issue 03
- Dr. Elara V. Sorenson, Deep Contextual Understanding: A Parameter-Efficient Large Language Model Approach To Fine-Grained Affective Computing, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 10 (2025): Volume 02 Issue 10