
ALIGNING EXPLAINABLE AI WITH USER NEEDS: A PROPOSAL FOR A PREFERENCE-AWARE EXPLANATION FUNCTION
Abstract
The rapid advancement and widespread deployment of Artificial Intelligence (AI) models, particularly deep neural networks, have led to remarkable successes across diverse domains. However, the inherent "black-box" nature of many high-performing models poses significant challenges, including a lack of transparency, trust, and accountability. Explainable Artificial Intelligence (XAI) aims to bridge this gap by making AI decisions understandable to humans. While numerous XAI methods have emerged, a crucial aspect is often overlooked: user preferences for explanations are diverse and context-dependent, and a generic explanation may not suffice for all users or all decision-making scenarios. This article proposes a conceptual framework centered on a mapping function designed to adapt explanation generation to specific user profiles, contextual factors, and AI model characteristics. We review the landscape of XAI, analyze the varying needs of stakeholders, and detail the proposed mapping function's inputs, logic, and outputs. This user-centric approach promises to enhance the utility, trustworthiness, and effectiveness of XAI systems, fostering broader adoption and responsible AI deployment. We conclude by outlining key challenges and future research directions necessary to realize this vision.
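To make the proposed mapping function concrete, the following is a minimal sketch of how such a function might be structured. All names, profile fields, and selection rules here are illustrative assumptions, not the article's actual specification; the sketch only shows the general shape of mapping (user profile, context, model characteristics) to an explanation style.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Hypothetical fields: the article's actual profile schema may differ.
    expertise: str           # e.g. "lay", "domain_expert", "ml_engineer"
    preferred_modality: str  # e.g. "text", "visual", "feature_importance"

@dataclass
class Context:
    stakes: str          # e.g. "low", "high"
    time_budget_s: float # how quickly an explanation must be produced

def select_explanation_method(user: UserProfile,
                              context: Context,
                              model_is_interpretable: bool) -> str:
    """Illustrative mapping from (user, context, model) to an explanation style."""
    if model_is_interpretable:
        # An intrinsically interpretable model can expose its own structure.
        return "intrinsic"
    if user.expertise == "lay":
        # Non-experts often benefit from contrastive or example-based forms.
        return "counterfactual" if context.stakes == "high" else "example_based"
    if context.time_budget_s < 1.0:
        # A precomputed global surrogate can be served with low latency.
        return "surrogate_global"
    # Default for technical users: local feature-attribution (SHAP/LIME-style).
    return "feature_attribution"
```

For example, a lay user in a high-stakes scenario would be routed to a counterfactual explanation, while a time-constrained expert query would fall back to a precomputed surrogate. In a full framework, the rule set would be replaced or augmented by learned preferences and feedback.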
Keywords
Explainable artificial intelligence (XAI), user-centered AI, preference-aware explanation, human-AI interaction
Copyright License
Authors retain the copyright of their manuscripts, and all Open Access articles are disseminated under the terms of the Creative Commons Attribution License 4.0 (CC-BY), which permits unrestricted use, distribution, and reproduction in any medium, provided that the original work is appropriately cited. The use of general descriptive names, trade names, trademarks, and so forth in this publication, even if not specifically identified, does not imply that these names are not protected by the relevant laws and regulations.