Generative AI for software testing: Harnessing large language models for automated and intelligent quality assurance

Dandotiya, Subham (2025) Generative AI for software testing: Harnessing large language models for automated and intelligent quality assurance. International Journal of Science and Research Archive, 14 (1). pp. 1931-1935. ISSN 2582-8185

Full text: IJSRA-2025-0266.pdf (Published Version, 431 kB)
Available under License Creative Commons Attribution Non-commercial Share Alike.

Abstract

Software testing is indispensable for ensuring that modern applications meet rigorous standards of functionality, reliability, and security. However, the complexity and pace of contemporary software development often overwhelm traditional and even AI-based testing approaches, leading to gaps in coverage, delayed feedback, and increased maintenance costs. Recent breakthroughs in Generative AI, particularly Large Language Models (LLMs), offer a new avenue for automating and optimizing testing processes. These models can dynamically generate test cases, predict system vulnerabilities, handle continuous software changes, and reduce the burden on human testers. This paper explores how Generative AI complements and advances established AI-driven testing frameworks, outlines the associated challenges of data preparation and governance, and proposes future directions for fully autonomous, trustworthy testing solutions.
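To make the abstract's central idea concrete, the following minimal sketch shows one way an LLM could be asked to draft unit tests for a function under test. It is an illustration only, not code from the paper: the generate_tests helper, the complete callable, and the stubbed model response are all assumptions standing in for whichever LLM client an implementer chooses.

# Minimal sketch of LLM-driven test-case generation (assumed design,
# not the paper's implementation). `complete` is any function that
# sends a prompt to an LLM and returns its text response.

def generate_tests(source_code: str, complete) -> str:
    """Ask an LLM to draft pytest-style unit tests for the given code."""
    prompt = (
        "You are a software test engineer. Write pytest unit tests, "
        "including edge cases, for the following Python function:\n\n"
        + source_code
    )
    return complete(prompt)  # the model's generated test code, as text

def _stub_complete(prompt: str) -> str:
    # Stand-in for a real LLM call; a real client (e.g. a wrapper around
    # a hosted chat-completions API or a local model) would return
    # model-generated tests here instead of this canned example.
    return "def test_clamp_within_bounds():\n    assert clamp(5, 0, 10) == 5"

if __name__ == "__main__":
    sample = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"
    print(generate_tests(sample, _stub_complete))

In practice the generated tests would be written to a file, executed, and filtered for failures before being merged, which is where the paper's concerns about governance and human oversight apply.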

Item Type: Article
Uncontrolled Keywords: Artificial Intelligence; Generative AI; Large Language Models (LLMs); Software Testing; Test Automation; Quality Assurance; DevOps
Subjects: Q Science > Q Science (General)
Q Science > QA Mathematics > QA76 Computer software
Depositing User: Editor IJSRA
Date Deposited: 09 Jul 2025 17:03
Last Modified: 09 Jul 2025 17:03
URI: https://eprint.scholarsrepository.com/id/eprint/251
