Annam, Deepika (2025) AI-powered real-time data pipeline optimization using deep reinforcement learning. World Journal of Advanced Research and Reviews, 26 (2). pp. 2647-2653. ISSN 2581-9615
WJARR-2025-1957.pdf - Published Version
Available under a Creative Commons Attribution-NonCommercial-ShareAlike license.
Abstract
Deep Reinforcement Learning (DRL) represents a transformative paradigm for real-time data pipeline optimization across diverse industrial applications. Traditional optimization techniques often yield suboptimal results in dynamic environments with fluctuating workloads, while DRL enables autonomous systems to adapt through experience. This article examines how DRL integrates with distributed stream processing systems to address critical challenges, including workload unpredictability, resource dependencies, and infrastructure heterogeneity. The integration of neural networks with reinforcement learning principles allows for sophisticated decision-making that significantly improves resource utilization and operational efficiency. Various algorithms, including Deep Q-Networks, Proximal Policy Optimization, and Soft Actor-Critic, demonstrate particular efficacy in different application contexts. From healthcare to data centers, robotics to IoT systems, DRL implementation delivers measurable improvements in throughput, latency reduction, and resource optimization. Though implementation challenges exist, including hyperparameter sensitivity and sample efficiency considerations, the potential benefits of DRL-powered optimization for data-intensive industries are substantial, offering a path toward more intelligent, adaptive, and efficient data processing architectures.
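The core idea the abstract describes — an agent learning resource-allocation decisions from experience rather than fixed rules — can be illustrated with a deliberately small sketch. The snippet below uses tabular Q-learning (the precursor of the Deep Q-Networks the abstract names) to tune the parallelism of a single stream-processing operator; the workload levels, reward shape, and hyperparameters are illustrative assumptions, not the paper's actual formulation, and a real pipeline would replace the table with a neural network over far richer state.

```python
# Toy sketch (assumptions, not the paper's method): a Q-learning agent
# picks how many replicas of a stream operator to run, balancing the
# latency cost of under-provisioning against the cost of idle replicas.
import random

LOADS = [1, 2, 3]    # bucketed workload level observed each tick
ACTIONS = [1, 2, 3]  # parallelism: number of operator replicas
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1

def reward(load, parallelism):
    # Penalize backlog when under-provisioned, small cost per idle replica.
    if parallelism < load:
        return -2.0 * (load - parallelism)
    return 1.0 - 0.3 * (parallelism - load)

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in LOADS for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(LOADS)
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if rng.random() < EPS:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = reward(s, a)
        s2 = rng.choice(LOADS)  # next workload level arrives
        best_next = max(q[(s2, x)] for x in ACTIONS)
        # Standard Q-learning temporal-difference update.
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
    return q

q = train()
# Greedy policy after training: chosen parallelism for each load level.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in LOADS}
print(policy)
```

After training, the greedy policy matches parallelism to the observed load level, which is the adaptive behavior the abstract attributes to DRL controllers: the scaling rule is learned from reward feedback rather than hand-coded thresholds.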
- **Item Type:** Article
- **Official URL:** https://doi.org/10.30574/wjarr.2025.26.2.1957
- **Uncontrolled Keywords:** Deep Reinforcement Learning; Data Pipeline Optimization; Stream Processing; Resource Management; Adaptive Control
- **Depositing User:** Editor WJARR
- **Date Deposited:** 20 Aug 2025 11:21
- **URI:** https://eprint.scholarsrepository.com/id/eprint/3241