Adeosun, Omoshalewa Anike and Akingbulere, Gbenga and Okika, Nonso and Umoh, Blessing Unwana and Adesola, Adeyemi A. and Ogweda, Haruna (2025) Adversarial attacks on deepfake detection: Assessing vulnerability and robustness in video-based models. Global Journal of Engineering and Technology Advances, 22(2), 090-102. ISSN 2582-5003
GJETA-2025-0029.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial Share Alike.
Abstract
The increasing prevalence of deepfake media has driven significant advances in detection models, but these models remain vulnerable to adversarial attacks that exploit weaknesses in deep learning architectures. This study investigates the vulnerability and robustness of video-based deepfake detection models, specifically evaluating a Long Short-Term Convolutional Neural Network (LST-CNN) against adversarial perturbations generated with the Fast Gradient Sign Method (FGSM). We evaluate model performance under both clean and adversarial conditions, highlighting the impact of adversarial modifications on detection accuracy. Our results show that adversarial attacks, even with slight perturbations, significantly reduce model accuracy, with the baseline LST-CNN suffering sharp performance degradation under FGSM attacks. Models trained with adversarial examples, however, exhibit enhanced resilience, maintaining higher accuracy under attack conditions. The study also evaluates defense strategies, such as adversarial training and input preprocessing, that help improve model robustness. These findings underscore the critical need for robust defense mechanisms to secure deepfake detection models and provide insights into improving model reliability in real-world applications, where adversarial manipulation is a growing concern.
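For readers unfamiliar with FGSM, the attack the abstract refers to perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇ₓ L(θ, x, y)). The sketch below is a generic illustration of that idea, not the authors' code; the names (`model`, `frames`, `labels`, `eps`) and the PyTorch framing are assumptions for illustration only.

```python
# Minimal FGSM sketch (generic illustration, not the paper's implementation).
# Assumes a trained classifier `model` and a batch of video frames in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, frames, labels, eps=0.01):
    """Return adversarially perturbed copies of `frames`.

    frames: float tensor in [0, 1], e.g. (batch, channels, H, W)
    eps:    perturbation budget; even small values (the "slight
            perturbations" the abstract mentions) can flip predictions.
    """
    frames = frames.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(frames), labels)
    loss.backward()
    # Step in the sign of the input gradient, then clamp to valid pixel range.
    adv = frames + eps * frames.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

The adversarial-training defense the abstract evaluates amounts, in this framing, to mixing such perturbed batches into training so the detector learns to classify attacked frames correctly; the exact training recipe is described in the paper itself.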
| Item Type: | Article |
| --- | --- |
| Official URL: | https://doi.org/10.30574/gjeta.2025.22.2.0029 |
| Uncontrolled Keywords: | Adversarial Attacks; Deepfake Detection; LST-CNN; FGSM; Video-Based Models |
| Depositing User: | Editor Engineering Section |
| Date Deposited: | 22 Aug 2025 08:57 |
| URI: | https://eprint.scholarsrepository.com/id/eprint/5332 |