Gopalan, Ranjith and Onniyil, Dileesh and Viswanathan, Ganesh and Samdani, Gaurav (2025) Hybrid models combining explainable AI and traditional machine learning: A review of methods and applications. World Journal of Advanced Engineering Technology and Sciences, 15 (2). pp. 1388-1402. ISSN 2582-8266
WJAETS-2025-0635.pdf - Published Version
Available under License Creative Commons Attribution-NonCommercial-ShareAlike.
Abstract
The rapid advancement of artificial intelligence and machine learning has produced highly sophisticated models capable of superhuman performance on a variety of tasks. However, the increasing complexity of these models has also turned them into "black boxes" whose internal decision-making is opaque and difficult to interpret. This lack of transparency and explainability has become a significant barrier to the widespread adoption of these models, particularly in sensitive domains such as healthcare and finance. To address this challenge, the field of Explainable AI has emerged, focusing on methods and techniques that improve the interpretability and explainability of machine learning models. This review paper provides a comprehensive overview of research combining Explainable AI with traditional machine learning approaches, known as "hybrid models". It discusses the importance of explainability in AI and the need to pair interpretable machine learning models with black-box models to balance accuracy and interpretability. It surveys key methods and applications, integration techniques, implementation frameworks, evaluation metrics, and recent developments in hybrid AI models. The paper also examines the challenges and limitations of implementing hybrid explainable AI systems, as well as future trends in the integration of Explainable AI and traditional machine learning. Altogether, this paper will serve as a valuable reference for researchers and practitioners working on explainable and interpretable AI systems.

Keywords: Explainable AI (XAI), Traditional Machine Learning (ML), Hybrid Models, Interpretability, Transparency, Predictive Accuracy, Neural Networks, Ensemble Methods, Decision Trees, Linear Regression, SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), Healthcare Analytics, Financial Risk Management, Autonomous Systems, Predictive Maintenance, Quality Control, Integration Techniques, Evaluation Metrics, Regulatory Compliance, Ethical Considerations, User Trust, Data Quality, Model Complexity, Future Trends, Emerging Technologies, Attention Mechanisms, Transformer Models, Reinforcement Learning, Data Visualization, Interactive Interfaces, Modular Architectures, Ensemble Learning, Post-Hoc Explainability, Intrinsic Explainability, Combined Models
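The hybrid pattern the abstract describes, a black-box model paired with a post-hoc explainer such as SHAP, can be sketched in a few lines. The sketch below is illustrative only: the California housing dataset, the random-forest model, and all hyperparameters are assumptions chosen for demonstration, not the setup of any study covered by the review.

```python
# Minimal sketch of a post-hoc hybrid: an accurate black-box ensemble
# explained with SHAP. All choices below (dataset, model, parameters)
# are illustrative assumptions, not the setup of any reviewed study.
import numpy as np
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Load a toy regression task and hold out a test split.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box component: an ensemble tuned for predictive accuracy.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Explainability component: TreeExplainer attributes each prediction
# to the input features via Shapley values.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[:100])

# Global summary: mean |SHAP| per feature approximates feature importance.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```

A fully intrinsic alternative would replace the forest with a transparent model such as a decision tree or linear regression; the hybrid models the paper reviews sit between these two extremes.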
| Item Type: | Article |
|---|---|
| Official URL: | https://doi.org/10.30574/wjaets.2025.15.2.0635 |
| Uncontrolled Keywords: | Explainable AI; Machine Learning; SHAP; LIME; Hybrid Models; Interpretability |
| Depositing User: | Editor Engineering Section |
| Date Deposited: | 04 Aug 2025 16:31 |
| URI: | https://eprint.scholarsrepository.com/id/eprint/3789 |