Fine-tuning AI models for code generation: Advances and applications

Sonkar, Siddhant (2025) Fine-tuning AI models for code generation: Advances and applications. World Journal of Advanced Research and Reviews, 26 (1). pp. 1353-1359. ISSN 2581-9615

Available under a Creative Commons Attribution-NonCommercial-ShareAlike license.


Abstract

Fine-tuning pre-trained language models for code generation represents a significant advance in bridging artificial intelligence and software development. The process adapts foundation models trained on vast code repositories to specific programming languages, frameworks, and domains. The article examines the complete fine-tuning pipeline, beginning with the selection of appropriate base architectures such as Code Llama, StarCoder, and Codex, which are designed specifically for code understanding. A critical exploration of dataset preparation highlights the importance of curated, diverse examples that represent the target domain accurately while avoiding bias. The article then turns to parameter-efficient adaptation techniques such as Low-Rank Adaptation (LoRA), adapter modules, and prompt tuning, which dramatically reduce computational requirements while preserving performance. These innovations democratize access to specialized code-generation capabilities, making them attainable even with limited resources. Applications span intelligent code completion, natural-language-to-code translation, refactoring, cross-language conversion, and test generation, transforming developer workflows across experience levels. By examining the interplay between model architecture, data quality, fine-tuning techniques, and practical applications, the article provides comprehensive insight into how fine-tuned models are reshaping software development practice.
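
To make the parameter-efficient adaptation described above concrete, the following is a minimal sketch of LoRA fine-tuning setup using the Hugging Face transformers and peft libraries. The base checkpoint (codellama/CodeLlama-7b-hf), rank, scaling factor, and target modules below are illustrative assumptions for demonstration, not values taken from the paper.

```python
# Minimal LoRA setup sketch: freeze the base model and inject small
# trainable low-rank matrices into the attention projections.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "codellama/CodeLlama-7b-hf"  # assumed base model for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update (assumed)
    lora_alpha=16,                        # scaling factor for the update (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Only the injected adapter weights are trainable, typically well
# under 1% of the full parameter count.
model.print_trainable_parameters()
```

After training on a curated code dataset, the adapted model can be used for tasks such as code completion in the usual way, e.g. `model.generate(**tokenizer("def fibonacci(n):", return_tensors="pt"), max_new_tokens=64)`.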

Item Type: Article
Official URL: https://doi.org/10.30574/wjarr.2025.26.1.1172
Uncontrolled Keywords: Code Generation; Fine-Tuning; Parameter Efficiency; Knowledge Distillation; Security-Aware Programming; Adaptive Learning
Depositing User: Editor WJARR
Date Deposited: 22 Jul 2025 23:55
URI: https://eprint.scholarsrepository.com/id/eprint/1799