Agal, Sanjay and Bhavsar, Nikunj and Raulji, Krishna M and Macwan, Kiran (2025) Reproducibility crisis in deep learning vulnerability detection: An open science perspective. International Journal of Science and Research Archive, 15 (1). pp. 602-611. ISSN 2582-8185
IJSRA-2025-1041.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial Share Alike.
Abstract
This paper examines the persistent reproducibility crisis in deep learning vulnerability detection. The problem begins with inconsistent findings across studies: results frequently fail to replicate as expected. Addressing it requires more than a handful of "success stories"; it demands datasets that capture the full range of outcomes, including failures, together with complete experimental details and clearly specified measurement procedures. Differences in data quality, model construction, and evaluation methodology all contribute to these unpredictable outcomes, and the absence of a standardized approach is the central obstacle, particularly in healthcare, where detecting vulnerabilities is not merely academic but essential to patient safety and data security. With reproducibility on firmer ground, machine-learning-based diagnostic systems would earn greater trust and support better-informed decisions. By advocating an open science approach that values transparency and the free sharing of methods, this study aims to foster genuine collaboration and innovation, enabling deep learning to operate more reliably in healthcare systems. Adopting common evaluation practices may ultimately be the key to resolving these reproducibility problems and strengthening the credibility and practical value of these technologies in critical healthcare settings.
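To make the experimental-standardization point concrete, the sketch below illustrates two basic reproducibility practices the abstract alludes to: pinning sources of randomness and recording the full experiment configuration alongside the results. It is not taken from the paper; it is a minimal illustration assuming a PyTorch workflow, and the dataset, split, and model names in the config are hypothetical placeholders.

```python
# Minimal sketch (assumed PyTorch setup): fix randomness and log the
# experiment configuration so a vulnerability-detection run can be rerun.
import json
import os
import random

import numpy as np
import torch


def set_reproducible(seed: int = 42) -> None:
    """Pin the common sources of nondeterminism in a PyTorch experiment."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Prefer deterministic kernels where available (may cost throughput).
    torch.use_deterministic_algorithms(True, warn_only=True)
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"


def record_config(path: str, config: dict) -> None:
    """Write the experiment configuration next to the results."""
    config = dict(config, torch_version=torch.__version__)
    with open(path, "w") as fh:
        json.dump(config, fh, indent=2, sort_keys=True)


if __name__ == "__main__":
    set_reproducible(seed=42)
    record_config("run_config.json", {
        "dataset": "example-vuln-corpus",  # hypothetical dataset name
        "split": "time-ordered 80/10/10",  # hypothetical split policy
        "model": "token-level BiLSTM",     # hypothetical architecture
        "seed": 42,
        "metrics": ["precision", "recall", "F1"],
    })
```

Publishing such a configuration file with every reported result is one concrete form of the "nitty-gritty experimental detail" the paper argues reproducible vulnerability-detection studies need.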
| Item Type: | Article |
| --- | --- |
| Official URL: | https://doi.org/10.30574/ijsra.2025.15.1.1041 |
| Uncontrolled Keywords: | Reproducibility; Deep Learning; Vulnerability Detection; Open Science; AI Transparency; Cybersecurity; Benchmark Datasets; Experimental Standardization; Machine Learning Reliability; Model Validation |
| Depositing User: | Editor IJSRA |
| Date Deposited: | 22 Jul 2025 15:39 |
| URI: | https://eprint.scholarsrepository.com/id/eprint/1459 |