Understanding Overfitting in AI and its impact on Cybersecurity

Russell, Brad (2025) Understanding Overfitting in AI and its impact on Cybersecurity. World Journal of Advanced Research and Reviews, 27 (2). pp. 361-372. ISSN 2581-9615

Abstract

Artificial intelligence is becoming an essential part of cybersecurity, but its advantages are not reaching everyone equally. This paper investigates the problem of overfitting in AI security systems, which occurs when a model fits its historical training data too closely and fails to generalize to new or evolving threats. Using a comparative case study approach, we analyze events such as the 2017 WannaCry ransomware outbreak and examine how both large technology firms and public sector organizations respond to these challenges. Our findings show that overfitting is not just a technical flaw but is shaped by decisions about resources, maintenance, and access to expertise. Organizations with ample funding and technical capacity can keep their AI models current and effective, while smaller, under-resourced groups often rely on outdated systems that leave them exposed to attack. This pattern raises concerns about growing inequality in digital security. The study concludes that addressing overfitting requires not only better technical solutions but also policy changes and industry standards that support fairness, transparency, and adaptability. By making advanced cybersecurity tools more accessible and focusing on continuous improvement, we can help ensure that digital protection is available to all organizations, not just the privileged few.
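
As a concrete illustration of the failure mode the abstract describes, the minimal sketch below (illustrative only, not drawn from the paper) uses scikit-learn on synthetic data standing in for labelled network traffic. An unconstrained decision tree fits its training set almost perfectly yet scores noticeably worse on held-out data, while a more constrained model shows a smaller train/test gap:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for labelled traffic (benign vs. malicious),
    # with label noise so that memorizing the training set is possible
    # but does not generalize.
    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=5, flip_y=0.1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

    for name, model in [
        ("unconstrained tree", DecisionTreeClassifier(random_state=0)),
        ("depth-limited tree", DecisionTreeClassifier(max_depth=5,
                                                      random_state=0)),
    ]:
        model.fit(X_tr, y_tr)
        train_acc = accuracy_score(y_tr, model.predict(X_tr))
        test_acc = accuracy_score(y_te, model.predict(X_te))
        # A large train/test gap is the classic signature of overfitting.
        print(f"{name}: train={train_acc:.2f} test={test_acc:.2f} "
              f"gap={train_acc - test_acc:.2f}")

In a deployed detector, the same gap shows up over time: accuracy on the traffic the model was trained on stays high while accuracy on evolving threats decays, which is why the paper ties overfitting to ongoing maintenance rather than treating it as a one-time training error.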

Item Type: Article
Official URL: https://doi.org/10.30574/wjarr.2025.27.2.2859
Uncontrolled Keywords: Artificial Intelligence; Cybersecurity; Digital Divide; Machine Learning; Overfitting; Security Vulnerabilities
Date Deposited: 15 Sep 2025 05:52
URI: https://eprint.scholarsrepository.com/id/eprint/6105