Shallom, Kigbu and Ikemefuna, Chukwujekwu Damian (2025) Enhancing malware detection using federated learning and explainable AI for privacy-preserving threat intelligence. World Journal of Advanced Research and Reviews, 27 (1). pp. 331-351. ISSN 2581-9615
Abstract
The escalating complexity and frequency of malware attacks pose a significant challenge to conventional cybersecurity frameworks, particularly in scenarios demanding high data privacy and cross-organizational threat intelligence sharing. Traditional centralized machine learning models for malware detection often rely on aggregating data in a central server, thereby increasing the risk of data breaches and limiting the deployment of models in privacy-sensitive environments such as healthcare, finance, and critical infrastructure. To address these limitations, this study explores an integrated approach that combines Federated Learning (FL) with Explainable Artificial Intelligence (XAI) to enhance malware detection while preserving user privacy and system confidentiality. Federated learning enables the collaborative training of robust malware classifiers across multiple decentralized nodes without sharing raw data, thus maintaining local data sovereignty and complying with data protection regulations. The proposed framework incorporates deep learning architectures such as convolutional neural networks (CNNs) trained in a federated environment on feature vectors extracted from malicious binaries and behavior logs. To ensure transparency and trust in model predictions, explainable AI techniques, specifically SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), are integrated, providing actionable insights into the model's decision-making process. This study also presents a comprehensive evaluation using a benchmark malware dataset distributed across simulated client environments, measuring detection accuracy, communication overhead, privacy leakage, and interpretability performance. Results demonstrate that the FL-XAI approach achieves detection rates comparable to centralized models while ensuring data confidentiality and interpretability.
The research contributes to the evolving field of privacy-preserving threat intelligence by offering a scalable and explainable framework suitable for real-time cybersecurity applications.
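The federated setup described in the abstract hinges on a server that aggregates locally trained model updates rather than raw data. A minimal sketch of the standard federated averaging (FedAvg) aggregation step illustrates this; the function and variable names are illustrative, and the paper's exact aggregation scheme may differ:

```python
# Illustrative FedAvg sketch: clients train locally and send only model
# parameters; the server averages them, weighted by local dataset size.
# No raw data (binaries, behavior logs) ever leaves a client.

def fed_avg(client_weights, client_sizes):
    """Return the weighted average of client parameter vectors.

    client_weights: list of parameter vectors (one per client).
    client_sizes:   number of local training samples per client,
                    used as the averaging weight.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (n / total)
    return global_weights

# Three simulated clients, each holding a tiny 2-parameter "model":
clients = [[0.2, 0.4], [0.4, 0.8], [0.6, 1.2]]
sizes = [100, 100, 200]
print(fed_avg(clients, sizes))
```

In a full FL round, the server would broadcast `global_weights` back to the clients for the next round of local training; real systems (e.g., frameworks such as Flower or TensorFlow Federated) add secure aggregation and compression on top of this basic step.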
| Item Type: | Article |
|---|---|
| Official URL: | https://doi.org/10.30574/wjarr.2025.27.1.2541 |
| Uncontrolled Keywords: | Federated Learning; Explainable AI; Malware Detection; Privacy Preservation; Threat Intelligence; Model Interpretability |
| Date Deposited: | 01 Sep 2025 13:40 |
| URI: | https://eprint.scholarsrepository.com/id/eprint/4853 |