Explainable AI (XAI) for trustworthy and transparent decision-making: A theoretical framework for AI interpretability

Chinnaraju, Arunraju (2025) Explainable AI (XAI) for trustworthy and transparent decision-making: A theoretical framework for AI interpretability. World Journal of Advanced Engineering Technology and Sciences, 14 (3). pp. 170-207. ISSN 2582-8266

Article PDF: WJAETS-2025-0106.pdf (Published Version, 1MB)
Available under License Creative Commons Attribution Non-commercial Share Alike.

Abstract

Explainable Artificial Intelligence (XAI) has become a critical area of research in addressing the black-box nature of complex AI models, particularly as these systems increasingly influence high-stakes domains such as healthcare, finance, and autonomous systems. This study presents a theoretical framework for AI interpretability, offering a structured approach to understanding, implementing, and evaluating explainability in AI-driven decision-making. By analyzing key XAI techniques, including LIME, SHAP, and DeepLIFT, the research categorizes explanation methods based on scope, timing, and dependency on model architecture, providing a novel taxonomy for understanding their applicability across different use cases. Integrating insights from cognitive theories, the framework highlights how human comprehension of AI decisions can be enhanced to foster trust and reliability. A systematic evaluation of existing methodologies establishes critical explanation quality metrics, considering factors such as fidelity, completeness, and user satisfaction. The findings reveal key trade-offs between model performance and interpretability, emphasizing the challenges of balancing accuracy with transparency in real-world applications. Additionally, the study explores the ethical and regulatory implications of XAI, proposing standardized protocols for ensuring fairness, accountability, and compliance in AI deployment. By providing a unified theoretical framework and practical recommendations, this research contributes to the advancement of explainability in AI, paving the way for more transparent, interpretable, and human-centric AI systems.
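To make the attribution methods discussed in the abstract concrete, the sketch below computes exact Shapley values for a toy model by enumerating feature orderings; SHAP approximates these values efficiently for real models. The model, its weights, and the baseline here are illustrative assumptions, not taken from the article.

```python
from itertools import permutations

def model(x):
    # Hypothetical linear model; weights chosen purely for illustration.
    w = [2.0, 1.0, 0.0]
    return sum(wi * xi for wi, xi in zip(w, x))

def shapley_values(f, x, baseline):
    """Exact Shapley attributions via enumeration of all feature orderings.

    Features not yet added to the coalition are held at their baseline
    value. Cost is n! model evaluations per ordering step, so this is
    feasible only for a handful of features -- SHAP exists precisely to
    approximate this efficiently.
    """
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]          # reveal feature i's true value
            now = f(current)
            phi[i] += now - prev       # marginal contribution of feature i
            prev = now
    return [p / len(orders) for p in phi]

x = [1.0, 3.0, 5.0]
phi = shapley_values(model, x, baseline=[0.0] * 3)
# Efficiency property: attributions sum to f(x) - f(baseline).
total = sum(phi)
```

For a linear model with a zero baseline, each attribution reduces to weight times feature value, which makes the efficiency property (attributions summing to the prediction gap) easy to verify by hand.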

Item Type: Article
Official URL: https://doi.org/10.30574/wjaets.2025.14.3.0106
Uncontrolled Keywords: Explainable Artificial Intelligence (XAI); Model Interpretability; Decision Transparency; Machine Learning; AI Ethics; Human-AI Interaction; AI Accountability & Trustworthiness
Depositing User: Editor Engineering Section
Date Deposited: 27 Jul 2025 15:20
URI: https://eprint.scholarsrepository.com/id/eprint/2507