ALIGNING EXPLAINABLE AI WITH USER NEEDS: A PROPOSAL FOR A PREFERENCE-AWARE EXPLANATION FUNCTION
Abstract
The rapid advancement and widespread deployment of Artificial Intelligence (AI) models, particularly deep neural networks, have led to remarkable successes across diverse domains. However, the inherent "black-box" nature of many high-performing models poses significant challenges, undermining transparency, trust, and accountability. Explainable Artificial Intelligence (XAI) aims to bridge this gap by making AI decisions understandable to humans. While numerous XAI methods have emerged, a crucial but often overlooked aspect is the diverse, context-dependent nature of user preferences for explanations: a generic explanation may not suffice for all users or all decision-making scenarios. This article proposes a conceptual framework centered on a mapping function that adapts explanation generation to specific user profiles, contextual factors, and AI model characteristics. We review the XAI landscape, analyze the varying needs of stakeholders, and detail the proposed mapping function's inputs, logic, and outputs. This user-centric approach promises to enhance the utility, trustworthiness, and effectiveness of XAI systems, fostering broader adoption and responsible AI deployment. We conclude by outlining key challenges and future research directions necessary to realize this vision.
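To make the proposal concrete, the Python sketch below illustrates one way such a preference-aware mapping function could be structured, taking a user profile, a decision context, and model characteristics as inputs and returning an explanation type. All type names, fields, and selection rules here are illustrative assumptions derived from the abstract's description, not the authors' implementation.

```python
# Minimal sketch of a preference-aware explanation mapping function.
# All names and rules are hypothetical, chosen only to mirror the
# inputs/outputs described in the abstract.
from dataclasses import dataclass
from enum import Enum


class Expertise(Enum):
    LAY_USER = "lay_user"              # e.g. a loan applicant
    DOMAIN_EXPERT = "domain_expert"    # e.g. a clinician
    ML_PRACTITIONER = "ml_practitioner"


@dataclass
class UserProfile:
    expertise: Expertise
    prefers_visual: bool = False


@dataclass
class Context:
    high_stakes: bool = False     # e.g. medical or legal decisions
    time_budget_s: float = 60.0   # how long the user can spend on the explanation


@dataclass
class ModelInfo:
    intrinsically_interpretable: bool = False  # e.g. a small decision tree
    supports_gradients: bool = False           # enables gradient-based attributions


def select_explanation(user: UserProfile, ctx: Context, model: ModelInfo) -> str:
    """Map (user profile, context, model characteristics) to an explanation type."""
    # Interpretable models can simply expose their own decision logic.
    if model.intrinsically_interpretable:
        return "direct_model_inspection"
    # Lay users under time pressure get short, contrastive explanations.
    if user.expertise is Expertise.LAY_USER:
        return "counterfactual" if ctx.time_budget_s < 120 else "example_based"
    # Domain experts in high-stakes settings need feature-level evidence.
    if user.expertise is Expertise.DOMAIN_EXPERT:
        return ("feature_attribution_with_uncertainty"
                if ctx.high_stakes else "feature_attribution")
    # ML practitioners: pick the richest method the model supports.
    return "gradient_saliency" if model.supports_gradients else "perturbation_attribution"


if __name__ == "__main__":
    clinician = UserProfile(expertise=Expertise.DOMAIN_EXPERT)
    print(select_explanation(clinician, Context(high_stakes=True), ModelInfo()))
    # -> feature_attribution_with_uncertainty
```

The key design point the abstract argues for is visible in the signature: the explanation method is an output of the function, selected per user and per situation, rather than a fixed property of the XAI pipeline.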
Keywords
Explainable Artificial Intelligence (XAI); user preferences; explanation generation; mapping function; user-centric design; trust
Similar Articles
- Olabayoji Oluwatofunmi Oladepo, Explainable Artificial Intelligence in Socio-Technical Contexts: Addressing Bias, Trust, and Interpretability for Responsible Deployment, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 09 (2025)
- Dr. Elias T. Vance, Prof. Camille A. Lefevre, ENHANCING TRUST AND CLINICAL ADOPTION: A SYSTEMATIC LITERATURE REVIEW OF EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) APPLICATIONS IN HEALTHCARE, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 10 (2025)
- Olabayoji Oluwatofunmi Oladepo, Opeyemi Eebru Alao, EXPLAINABLE MACHINE LEARNING FOR FINANCIAL ANALYSIS, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 07 (2025)
- Dr. Alejandro Moreno, An Explainable, Context-Aware Zero-Trust Identity Architecture for Continuous Authentication in Hybrid Device Ecosystems, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 11 (2025)
- Dr. Alessia Romano, Prof. Marco Bianchi, DEVELOPING AI ASSISTANCE FOR INCLUSIVE COMMUNICATION IN ITALIAN FORMAL WRITING, International Journal of Advanced Artificial Intelligence Research: Vol. 1 No. 01 (2024)
- Dr. Lukas Reinhardt, Next-Generation Security Operations Centers: A Holistic Framework Integrating Artificial Intelligence, Federated Learning, and Sustainable Green Infrastructure for Proactive Threat Mitigation, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 09 (2025)
- Dr. Kenji Yamamoto, Prof. Lijuan Wang, LEVERAGING DEEP LEARNING IN SURVIVAL ANALYSIS FOR ENHANCED TIME-TO-EVENT PREDICTION, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 05 (2025)
- Sara Rossi, Samuel Johnson, NEUROSYMBOLIC AI: MERGING DEEP LEARNING AND LOGICAL REASONING FOR ENHANCED EXPLAINABILITY, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 06 (2025)
- Ashis Ghosh, FAILURE-AWARE ARTIFICIAL INTELLIGENCE: DESIGNING SYSTEMS THAT DETECT, CATEGORIZE, AND RECOVER FROM OPERATIONAL FAILURES, International Journal of Advanced Artificial Intelligence Research: Vol. 3 No. 01 (2026)
- Dr. Jae-Won Kim, Dr. Sung-Ho Lee, NAVIGATING ALGORITHMIC EQUITY: UNCOVERING DIVERSITY AND INCLUSION INCIDENTS IN ARTIFICIAL INTELLIGENCE, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 07 (2025)