Real-time clinical decision support with explainable AI: Balancing accuracy and interpretability in high-stakes environments

Faheem, Muhammad and Iqbal, Aqib (2025) Real-time clinical decision support with explainable AI: Balancing accuracy and interpretability in high-stakes environments. International Journal of Science and Research Archive, 16 (1). pp. 1204-1220. ISSN 2582-8185

Abstract

Artificial intelligence (AI) is rapidly reshaping how healthcare decisions are made, ushering in an era of data-driven decision making, especially in high-stakes clinical environments that demand rapid and trustworthy decisions. However, the opacity of many AI models remains a serious obstacle to clinical adoption, motivating the use of explainable AI (XAI) methods. This paper considers the architecture, challenges, and applications of real-time clinical decision support systems (RT-CDSS) augmented with XAI. Drawing on case studies in imaging analytics, dementia prediction, and pharmacovigilance, the study examines how explainability affects trust, safety, and system usability. Major concerns addressed include the trade-off between model performance and interpretability, the technical and organizational barriers to deployment, and the implications of the ethical and regulatory environment for making AI interpretable in clinical contexts. Patient-centered outcomes are assessed alongside process evaluation measures, including SHAP, LIME, and measures of clinician trust over time. Finally, the paper discusses emerging trends such as human-in-the-loop architectures, federated learning, and consortium-based initiatives, and presents a roadmap for developing RT-CDSS that are not only accurate but also accountable, intelligible, and ethically compliant. These results highlight the need for AI technology that can be readily incorporated into clinical practice to support transparent, interpretable, and safe medical decisions.
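
The abstract names SHAP and LIME as explainability measures evaluated alongside clinician trust. As a minimal illustrative sketch only (not taken from the paper), the Python snippet below shows how such feature attributions are typically produced for a tabular clinical risk model; the model, data, and feature names are hypothetical placeholders.

# Illustrative sketch: SHAP and LIME attributions for a hypothetical clinical risk model.
# The features, data, and model are synthetic placeholders, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
features = ["age", "systolic_bp", "creatinine", "heart_rate"]  # hypothetical inputs
X = rng.normal(size=(500, len(features)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: additive feature attributions for the first patient's prediction.
shap_values = shap.TreeExplainer(model).shap_values(X[:1])
print("SHAP attributions:", np.array(shap_values).round(3))

# LIME: local surrogate explanation for the same prediction.
lime_explainer = LimeTabularExplainer(
    X, feature_names=features, class_names=["low risk", "high risk"]
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print("LIME explanation:", lime_exp.as_list())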

Item Type: Article
Official URL: https://doi.org/10.30574/ijsra.2025.16.1.1992
Uncontrolled Keywords: Explainable Artificial Intelligence (XAI); Real-Time Clinical Decision Support Systems (RT-CDSS); Medical AI Interpretability; Trust in Healthcare AI; Human-In-The-Loop Decision Support
Date Deposited: 01 Sep 2025 12:23
Related URLs:
URI: https://eprint.scholarsrepository.com/id/eprint/4576