Mitigating adversarial threats in deep learning models trained on sensitive imaging and sequencing datasets within hospital infrastructures

Onwubuche, Nnamdi Rex (2025) Mitigating adversarial threats in deep learning models trained on sensitive imaging and sequencing datasets within hospital infrastructures. International Journal of Science and Research Archive, 16 (1). pp. 1146-1167. ISSN 2582-8185

Abstract

As deep learning continues to transform clinical diagnostics, models trained on sensitive imaging and sequencing datasets are increasingly deployed within hospital infrastructures for tasks such as tumor classification, variant calling, and disease risk prediction. While these models offer remarkable accuracy and efficiency, they also present new vulnerabilities to adversarial threats: maliciously crafted inputs designed to deceive AI systems without perceptibly altering visual or genomic content. Such attacks can compromise diagnostic reliability, patient safety, and institutional trust, particularly when they target critical applications involving radiology scans or genetic data. This paper investigates strategies for mitigating adversarial threats in deep learning models operating within hospital ecosystems. We explore how attacks such as the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and adversarial patching exploit model interpretability gaps and high-dimensional data sparsity in medical domains. Emphasis is placed on the unique risks posed to models trained on radiological images (e.g., CT, MRI) and sequencing outputs (e.g., variant allele frequencies, expression matrices), which contain highly sensitive and potentially re-identifiable patient information. We present a multi-tiered defense framework incorporating adversarial training, input preprocessing techniques, certified robustness estimators, and gradient masking to strengthen model resilience. Additionally, we introduce a hospital-specific deployment architecture that includes real-time adversarial input detection using AI-enhanced monitoring agents and edge-layer validation. This design ensures localized protection while minimizing latency in high-throughput clinical workflows. By focusing on healthcare-specific deep learning vulnerabilities and aligning with clinical data governance standards, this research contributes a secure deployment pathway for trustworthy AI applications in precision medicine and hospital cybersecurity.
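To make the attack and defense mechanics concrete, the sketch below shows FGSM adversarial example generation and a single adversarial-training step in PyTorch. This is a minimal illustration under assumed conditions, not the paper's implementation: the model, optimizer, data batch, and the perturbation budget epsilon are placeholders, and a deployed clinical system would layer this with the preprocessing, certified-robustness, and monitoring components described in the abstract.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Craft FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss.
    x_adv = images + epsilon * images.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep intensities in a valid range

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on an even mix of clean and FGSM-perturbed inputs."""
    model.train()
    x_adv = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = 0.5 * F.cross_entropy(model(images), labels) \
         + 0.5 * F.cross_entropy(model(x_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

In practice, single-step FGSM training alone can remain vulnerable to stronger iterative attacks, which is why multi-step methods such as PGD-based adversarial training are typically combined with the other defense tiers the paper proposes.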

Item Type: Article
Official URL: https://doi.org/10.30574/ijsra.2025.16.1.2128
Uncontrolled Keywords: Adversarial Attacks; Deep Learning Security; Medical Imaging; Genomic Sequencing; Clinical AI; Hospital Cybersecurity
Date Deposited: 01 Sep 2025 12:24
URI: https://eprint.scholarsrepository.com/id/eprint/4563