Pothukuchi, Nikhila (2025) Hardware-aware neural network training: A comprehensive framework for efficient AI model deployment. World Journal of Advanced Engineering Technology and Sciences, 15 (1). pp. 1831-1838. ISSN 2582-8266
WJAETS-2025-0344.pdf - Published Version
Available under a Creative Commons Attribution-NonCommercial-ShareAlike license.
Abstract
This article presents a comprehensive guide to hardware-aware training techniques for artificial intelligence models, addressing the critical balance between performance optimization and resource efficiency. The discussion encompasses key strategies including quantization methods for precision reduction, systematic network pruning for architecture refinement, sparsity implementation for model optimization, and hardware-specific adaptations. Through detailed exploration of these techniques, the article demonstrates how integrating hardware considerations into the training process yields substantial improvements in deployment efficiency and overall model performance while reducing energy consumption. The framework outlined offers practical solutions for organizations seeking to optimize their AI deployments across platforms ranging from edge devices to cloud infrastructure, while maintaining competitive accuracy.
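The abstract describes precision reduction during training only at a high level. As a rough illustration of what such a technique can look like in practice, the sketch below implements conventional fake quantization with a straight-through estimator in PyTorch. This is a minimal sketch under the assumption that the article's approach resembles standard quantization-aware training; the names `fake_quantize` and `QuantAwareLinear` are illustrative and do not come from the article itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Symmetric per-tensor fake quantization with a straight-through estimator."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    # Round onto the integer grid, then map back to floating point.
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    # Straight-through estimator: the forward pass sees the quantized values,
    # the backward pass treats quantization as the identity function.
    return x + (q - x).detach()

class QuantAwareLinear(nn.Module):
    """Linear layer whose weights are fake-quantized on every forward pass,
    so training optimizes against the quantization error the model will see
    at deployment time (illustrative; not the article's implementation)."""

    def __init__(self, in_features: int, out_features: int, num_bits: int = 8):
        super().__init__()
        self.inner = nn.Linear(in_features, out_features)
        self.num_bits = num_bits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_q = fake_quantize(self.inner.weight, self.num_bits)
        return F.linear(x, w_q, self.inner.bias)

# Usage: drop-in replacement for nn.Linear during training.
layer = QuantAwareLinear(16, 4, num_bits=8)
out = layer(torch.randn(2, 16))
out.sum().backward()  # gradients reach self.inner.weight through the STE
```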
| Item Type: | Article |
| --- | --- |
| Official URL: | https://doi.org/10.30574/wjaets.2025.15.1.0344 |
| Uncontrolled Keywords: | Hardware-Aware Training; Model Optimization; Neural Network Efficiency; Resource Optimization; Energy-Efficient AI |
| Depositing User: | Editor Engineering Section |
| Date Deposited: | 04 Aug 2025 16:15 |
| URI: | https://eprint.scholarsrepository.com/id/eprint/3116 |