Open Access

Architecting Trustworthy and Equitable Artificial Intelligence in Clinical Research and Care: Ethical, Regulatory, and Workforce Imperatives for Responsible Translation

Department of Biomedical Informatics, University of Edinburgh, United Kingdom

Abstract

Artificial intelligence (AI) and machine learning (ML) technologies are increasingly integrated into clinical research and healthcare delivery. While these tools promise improved diagnostic precision, operational efficiency, and personalized interventions, they also introduce profound ethical, regulatory, and equity-related challenges.

This study develops a comprehensive conceptual framework for the responsible integration of AI/ML in clinical research and care, emphasizing interpretability, governance, reporting standards, workforce diversity, participant engagement, and health equity.

A structured narrative synthesis was conducted using foundational scholarship on clinical ML applications, responsible AI frameworks, AI-specific reporting guidelines, regulatory proposals, stakeholder engagement models, and diversity initiatives within biomedical research. Theoretical constructs were integrated across ethical, clinical, regulatory, and sociotechnical domains to produce an implementation-oriented analytical framework.


Responsible AI in clinical contexts requires multidimensional alignment across five domains: algorithmic transparency and interpretability; regulatory adaptability; rigorous reporting and evaluation standards; participant-centered engagement and health literacy; and systemic investment in workforce diversity. The analysis demonstrates that technical robustness alone is insufficient for trustworthy deployment. Instead, trust emerges from transparent validation, participatory governance, equitable representation in data and research teams, and ethically grounded clinical decision support integration.

AI-enabled clinical research and care must be governed by principles that extend beyond computational performance. Regulatory innovation, structured reporting, stakeholder-centric engagement, and diversification of the biomedical workforce are mutually reinforcing pillars of responsible AI ecosystems. Sustainable implementation demands systemic transformation rather than incremental technological adoption.

Keywords

