Open Access

EXPLAINABLE ARTIFICIAL INTELLIGENCE AS A FOUNDATION FOR SUSTAINABLE, TRUSTWORTHY, AND HUMAN-CENTRIC DECISION-MAKING ACROSS CONSUMER, SUPPLY CHAIN, AND HEALTHCARE DOMAINS

University of Toronto, Canada

Abstract

Explainable Artificial Intelligence (XAI) has emerged as a critical paradigm in contemporary artificial intelligence research and practice, responding to growing concerns about transparency, accountability, trust, and ethical responsibility in algorithmic decision-making. As artificial intelligence systems increasingly permeate consumer markets, supply chains, and healthcare ecosystems, the opacity of complex machine learning models has raised fundamental challenges for organizational legitimacy, regulatory compliance, and user acceptance. This research article develops an integrative, theory-driven, and empirically grounded analysis of XAI as a strategic enabler of sustainable growth and responsible innovation across multiple high-impact domains. Drawing on the reviewed body of literature, the article synthesizes insights from consumer packaged goods retailing, e-commerce, supply chain cyber resilience, healthcare analytics, and human–computer interaction to construct a unified conceptual framework explaining how explainability mechanisms influence trust formation, decision quality, and long-term organizational value creation.

The study positions XAI not merely as a technical enhancement but as a socio-technical intervention that reshapes power relations between algorithmic systems and human stakeholders. By examining prior empirical findings and theoretical models, the article elucidates how explainability contributes to cognitive understanding, affective reassurance, and moral legitimacy among users, employees, managers, and patients. Particular attention is given to the role of explainability in mitigating algorithmic bias, complying with data protection regulations such as the General Data Protection Regulation, and supporting agile decision-making under conditions of uncertainty and risk. The analysis further explores sector-specific dynamics, demonstrating how XAI adoption differs in consumer-centric business practices, cyber-resilient supply chains, and clinical decision support systems.

Methodologically, the article adopts a qualitative, integrative research design based on systematic theoretical elaboration and cross-domain synthesis of empirical findings reported in the referenced studies. Rather than introducing new datasets or computational experiments, the research advances knowledge by deeply interpreting existing evidence and identifying latent patterns, tensions, and unresolved questions within the literature. The findings suggest that XAI enhances sustainable growth by aligning algorithmic intelligence with human values, fostering trust-based relationships, and enabling informed oversight of automated decisions. However, the discussion also highlights persistent limitations, including cognitive overload, context-dependence of explanations, and the risk of superficial transparency.

The article concludes by outlining future research directions and managerial implications, emphasizing the need for interdisciplinary collaboration, user-centric design, and regulatory-aware implementation strategies. Overall, this work contributes a comprehensive and publication-ready scholarly perspective on XAI as a cornerstone of sustainable, ethical, and human-centered artificial intelligence.

Keywords

References

Behera, R. K., Bala, P. K., & Rana, N. P. (2023). Creation of sustainable growth with explainable artificial intelligence: An empirical insight from consumer packaged goods retailers. Journal of Cleaner Production, 399.
Behera, R. K., Bala, P. K., & Rana, N. P. (2023). Creation of sustainable growth with explainable artificial intelligence: An empirical insight from consumer packaged goods firms. Elsevier.
Bernardo, E., & Seva, R. (2023). Affective design analysis of explainable artificial intelligence (XAI): A user-centric perspective. Informatics, 10(1).
Binns, R., et al. (2018). Understanding the role of transparency in the use of machine learning in healthcare. Proceedings of the CHI Conference on Human Factors in Computing Systems.
Chaudhary, M., et al. (2024). Introduction to explainable AI (XAI) in e-commerce. Springer.
Dastin, J. (2018). Amazon scrapped AI recruiting tool that showed bias against women. Reuters.
European Commission. (2018). General Data Protection Regulation.
Hulsen, T. (2022). Literature analysis of artificial intelligence in biomedicine. Annals of Translational Medicine, 10.
Hulsen, T., et al. (2019). From big data to precision medicine. Frontiers in Medicine, 6.
Hulsen, T., et al. (2022). From big data to better patient outcomes. Clinical Chemistry and Laboratory Medicine, 61.
Joiner, I. A. (2018). Artificial intelligence: AI is nearby. Chandos Publishing.
Machlev, R., et al. (2022). Explainable artificial intelligence techniques for energy and power systems: Review, challenges and opportunities. Energy and AI, 9.
Mahbooba, B., et al. (2021). Explainable artificial intelligence (XAI) to enhance trust management in intrusion detection systems using decision tree model. Complexity.
Sadeghi, R. K., et al. (2024). Explainable artificial intelligence and agile decision-making in supply chain cyber resilience. Decision Support Systems, 180.
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146.
Somashekhar, S. P., et al. (2018). IBM Watson for Oncology and its impact on patient care. Journal of Oncology Practice, 14(7).
Trivedi, S. (2024). Explainable artificial intelligence in consumer-centric business practices and approaches. IGI Global.
Yu, K.-H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2.
Yu, L., & Li, Y. (2022). Artificial intelligence decision-making transparency and employees' trust. Behavioral Sciences, 12(5).
Zhang, Y., Liu, H., & Li, Z. (2019). Explainable AI for medical imaging: A case study in pathology. Journal of Pathology Informatics, 10.
Vance, E. R., & Choi, S. J. (2025). A machine learning framework for predicting cardiovascular disease risk. International Journal of Modern Computer Science and IT Innovations, 2(10).
