Adversarial machine learning and securing AI systems

Chawande, Swapnil (2025) Adversarial machine learning and securing AI systems. World Journal of Advanced Engineering Technology and Sciences, 15 (1). pp. 1344-1356. ISSN 2582-8266

Article PDF: WJAETS-2025-0338.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial Share Alike.

Download (922 kB)

Abstract

Artificial intelligence systems face significant challenges from adversarial machine learning, in which small but carefully constructed perturbations to input data cause models to misbehave, producing incorrect predictions or system failures. This paper investigates how adversarial attacks affect AI systems in three primary sectors: autonomous driving, security systems, and healthcare. It examines white-box and black-box adversarial attacks and analyzes the vulnerabilities of machine learning models. The paper evaluates existing defense methods, including adversarial training and robust optimization, and discusses the difficulty of achieving security without degrading model performance. Because existing defenses perform poorly against state-of-the-art adversarial techniques, stronger protection methods are needed. The paper concludes with security recommendations for AI systems that combine explainable AI with advanced adversarial training methods, so that models can identify and withstand evolving adversarial threats.
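As a concrete illustration of the white-box attacks the abstract describes, the sketch below implements the Fast Gradient Sign Method (FGSM) against a toy logistic-regression classifier. The weights, input, and epsilon are hypothetical values chosen for illustration and are not taken from the paper; the point is only to show how a gradient-sign perturbation lowers a model's confidence in the true label.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_loss_grad(x, w, y):
    """Gradient of the logistic loss with respect to the input x,
    for a linear model p = sigmoid(w . x) and label y in {0, 1}."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * wi for wi in w]

def fgsm_perturb(x, grad, epsilon):
    """FGSM: move each input feature one epsilon-sized step in the
    direction of the sign of the loss gradient, increasing the loss."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# Toy linear classifier with hypothetical weights and input.
w = [2.0, -1.0]
x = [0.5, 0.5]   # clean input, correctly classified toward class 1
y = 1            # true label

grad = logistic_loss_grad(x, w, y)
x_adv = fgsm_perturb(x, grad, epsilon=0.25)

p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))
# The small perturbation reduces the model's confidence in the true label.
```

In a white-box setting the attacker can compute this gradient exactly; in the black-box setting the abstract also mentions, the gradient must instead be estimated from model queries or transferred from a substitute model. Adversarial training, one of the defenses the paper evaluates, folds examples like `x_adv` back into the training set.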

Item Type: Article
Official URL: https://doi.org/10.30574/wjaets.2025.15.1.0338
Uncontrolled Keywords: Adversarial Attacks; Machine Learning; Model Robustness; Defense Mechanisms; AI Security; Deep Learning
Depositing User: Editor Engineering Section
Date Deposited: 04 Aug 2025 16:08
URI: https://eprint.scholarsrepository.com/id/eprint/2973