ISSN: 3049-2297

Developing Explainable and Ethical AI Chatbots for Healthcare Decision Support Systems

Review Article (Published On: 26-Oct-2025)

Dr. Prakash Chokkamreddy

Jou. Artif. Intell. Auto. Intell., 2(3):375-385

Dr. Prakash Chokkamreddy: Department of MBA, School of Management Studies, Guru Nanak Institutions Technical Campus


Article History: Received on: 27-Oct-25, Accepted on: 14-Dec-25, Published on: 26-Oct-25

Corresponding Author: Dr. Prakash Chokkamreddy

Email: chokkamprakashreddy@gmail.com

Citation: Dr. Prakash Chokkamreddy (2025). Developing Explainable and Ethical AI Chatbots for Healthcare Decision Support Systems. Jou. Artif. Intell. Auto. Intell., 2(3):375-385


Abstract

    

AI chatbots are changing the nature of healthcare by offering interactive decision support to clinicians and patients. Their adoption, however, hinges significantly on explainability and ethical integrity, which underpin trust, accountability, and patient safety. This paper presents a systematic framework for developing explainable and ethical AI chatbots for healthcare decision support systems. The framework integrates established explainability methods such as LIME and SHAP with core ethical principles, including fairness, transparency, privacy, and informed consent. Development and evaluation are guided by inclusive stakeholder engagement. The prototype chatbot features secure data processing and clear user interfaces. Rigorous evaluation shows improvements in accuracy, usability, trust, and ethical compliance, while complex data interpretation and bias mitigation remain open challenges. Future priorities include adaptive AI, stronger governance, and broader clinical integration. This work contributes to the responsible adoption of AI in healthcare, enabling safer and more equitable digital health solutions.
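
As an illustrative sketch only (not the paper's implementation), the snippet below shows how a decision-support chatbot might attach a LIME explanation to a risk prediction so that users can see which inputs drove a recommendation; the toy model, feature names, and risk labels are hypothetical.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(0)
    feature_names = ["age", "systolic_bp", "heart_rate", "temperature"]  # hypothetical inputs

    # Toy training data standing in for de-identified patient records.
    X_train = rng.normal(size=(500, 4))
    y_train = (X_train[:, 1] + 0.5 * X_train[:, 2] > 0).astype(int)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    explainer = LimeTabularExplainer(
        X_train,
        feature_names=feature_names,
        class_names=["low_risk", "high_risk"],
        mode="classification",
    )

    # Explain one patient's prediction; a chatbot could render these per-feature
    # weights as a plain-language rationale alongside its recommendation.
    patient = X_train[0]
    explanation = explainer.explain_instance(patient, model.predict_proba, num_features=4)
    for feature, weight in explanation.as_list():
        print(f"{feature}: {weight:+.3f}")

The same pattern applies to SHAP, which assigns additive per-feature contributions that can likewise be translated into user-facing explanations.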


