Explainable AI for Cyber Threat Intelligence: Enhancing Analyst Trust

Sunkara, Goutham (2025) Explainable AI for Cyber Threat Intelligence: Enhancing Analyst Trust. Open Access Research Journal of Science and Technology, 14 (2). 029-040. ISSN 2782-9960

Abstract

With the rise of artificial intelligence (AI) in contemporary cyber threat intelligence (CTI) platforms, the interpretability and transparency of AI-driven decisions have become pressing concerns. Although machine learning models are increasingly able to detect complex and evolving cyber threats, their black-box nature undermines human trust, limits the actionability of their insights, and raises accountability challenges. This paper examines the use of Explainable Artificial Intelligence (XAI) techniques, including SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-Agnostic Explanations), and attention-based visualizations, to improve interpretability in CTI systems. We present a case study showing that incorporating XAI into threat detection pipelines improves analysts' comprehension, reduces investigation time, and supports informed decision-making within Security Operations Centers (SOCs). Our findings indicate that XAI not only makes complex AI models understandable to human analysts but also fosters a collaborative and transparent cybersecurity ecosystem. The study offers a practical framework for implementing XAI-enriched CTI, enabling more responsible and effective AI-empowered cybersecurity.
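To make the abstract's core idea concrete, the sketch below shows one common way SHAP attributions can be attached to a threat-detection classifier. It is not code from the paper: the feature names, synthetic data, and model choice are all illustrative assumptions, and the paper's actual pipeline is not described here.

```python
# Minimal sketch (not from the paper): per-alert SHAP explanations for a
# hypothetical tabular threat classifier. Feature names and labels are
# invented for illustration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical network-flow features an analyst might see on an alert.
feature_names = ["bytes_out", "failed_logins",
                 "dst_port_entropy", "beacon_interval_var"]

# Synthetic stand-in data: "malicious" depends on two of the features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X[:1])

# Depending on the shap version, classifiers return either a list with one
# array per class or a single 3-D array (samples, features, classes).
if isinstance(sv, list):
    malicious_attr = sv[1][0]
else:
    malicious_attr = sv[0, :, 1]

# Per-alert attribution: which features pushed this flow toward "malicious"?
for name, value in zip(feature_names, malicious_attr):
    print(f"{name}: {value:+.3f}")
```

A LIME-based local explanation would play an analogous role (e.g. via `lime.lime_tabular.LimeTabularExplainer`), fitting an interpretable surrogate model around each individual alert rather than decomposing the model's output exactly.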

Item Type: Article
Official URL: https://doi.org/10.53022/oarjst.2025.14.2.0091
Copyright: © 2025 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.
Date Deposited: 01 Sep 2025 14:01
URI: https://eprint.scholarsrepository.com/id/eprint/5397