Queuing-Integrated Deep Reinforcement Learning For Adaptive Task Scheduling In Cloud Data Centers
Abstract
The accelerating digitalization of economic, industrial, and social systems has rendered cloud computing the backbone of contemporary information infrastructure. Yet the unprecedented growth in computational demand, the heterogeneity of workloads, and the volatility of user requirements have exposed deep limitations in classical task scheduling and resource management paradigms. Static or heuristic-based schedulers, which historically dominated cloud environments, are increasingly unable to cope with highly dynamic and stochastic workloads, fluctuating service-level requirements, and the imperative for energy-efficient operation. This study advances a comprehensive theoretical and analytical investigation of deep reinforcement learning–driven dynamic task scheduling in cloud computing, with particular emphasis on queuing-aware optimal decision making. Building on the methodological foundation established by Kanikanti et al. (2025), who demonstrated the effectiveness of deep Q-learning combined with optimal queuing theory for cloud task scheduling, this research situates their contribution within a broader interdisciplinary framework that spans energy-aware systems, multi-agent learning, and cyber-physical digital twins.
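The pairing of queuing theory with learning-based scheduling described above can be made concrete with a small sketch. The snippet below computes standard M/M/1 steady-state metrics that a queuing-aware scheduler could expose as state features to a learning agent; the function and parameter names are illustrative assumptions, not the formulation used by Kanikanti et al. (2025).

```python
# Hypothetical sketch: queue-theoretic features an RL scheduler might observe.
# Standard M/M/1 results; lambda = arrival rate, mu = service rate.

def mm1_features(arrival_rate: float, service_rate: float) -> dict:
    """Return M/M/1 steady-state metrics for a single-server queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate      # server utilization
    lq = rho * rho / (1.0 - rho)           # expected number of tasks waiting
    wq = lq / arrival_rate                 # expected waiting time (Little's law)
    return {"utilization": rho, "queue_length": lq, "waiting_time": wq}

features = mm1_features(arrival_rate=8.0, service_rate=10.0)
# utilization 0.8, expected queue length 3.2, expected wait 0.4 s
```

Feeding such analytically grounded features into the agent's state, rather than raw counters alone, is one plausible reading of what "queuing-aware" decision making entails.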
The article develops a unifying perspective that integrates insights from reinforcement learning theory, stochastic queuing models, energy management in cyber-physical systems, and adaptive control of autonomous agents. By synthesizing developments in microgrid energy management, underwater robotics, autonomous vehicle control, and digital twin–based production systems, the study demonstrates that the core challenge of cloud scheduling is not merely computational efficiency but the orchestration of learning-driven decisions across uncertain, delayed, and resource-constrained environments. In this sense, cloud data centers resemble complex adaptive systems in which computing tasks compete for shared resources in a manner analogous to energy flows in microgrids or coordinated actions in multi-robot systems.
Methodologically, the research adopts a text-based analytical design that combines formal reinforcement learning principles derived from Markov decision processes with queuing-theoretic interpretations of cloud workloads. The deep Q-learning framework of Kanikanti et al. (2025) is critically analyzed and extended conceptually through comparative evaluation against SARSA-based, actor–critic, and deep deterministic policy gradient approaches reported in the broader literature. Particular attention is devoted to how state abstraction, reward shaping, and queue length feedback enable schedulers to balance latency, throughput, and energy consumption simultaneously.
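The interplay of state abstraction, reward shaping, and queue-length feedback noted above can be sketched in a few lines. The snippet below uses a shaped reward that penalizes latency, energy consumption, and queue backlog, together with a one-step Q-learning update; the weights, names, and tabular Q-table (a stand-in for a deep Q-network, kept small for brevity) are assumptions for illustration, not the parameters of the framework analyzed here.

```python
import random
from collections import defaultdict

# Illustrative reward shaping: trade off latency, energy, and queue backlog.
# The weights are arbitrary assumptions chosen for the sketch.
W_LATENCY, W_ENERGY, W_QUEUE = 1.0, 0.5, 0.2

def shaped_reward(latency_s: float, energy_j: float, queue_len: int) -> float:
    # Higher is better: penalize slow, power-hungry, congested placements.
    return -(W_LATENCY * latency_s + W_ENERGY * energy_j + W_QUEUE * queue_len)

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(float)   # tabular stand-in for the deep Q-network

def select_action(state, actions):
    # Epsilon-greedy: explore occasionally, otherwise pick the best-valued VM.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions):
    # Standard one-step Q-learning backup toward the bootstrapped target.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In this reading, queue-length feedback enters twice: as part of the observed state and as a penalty term in the reward, which is how a scheduler can balance latency, throughput, and energy simultaneously.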
The results of this study are presented in a descriptive and interpretive manner grounded in the comparative literature. They indicate that deep Q-learning–based dynamic schedulers consistently outperform rule-based and shallow reinforcement learning approaches in terms of adaptive responsiveness, queue stability, and energy-aware decision making, as supported by studies in cloud computing, microgrids, and robotic coordination. The discussion further reveals that queuing-informed deep reinforcement learning architectures provide a theoretically robust mechanism for mitigating congestion collapse, improving quality of service, and aligning cloud operations with sustainability goals.
By offering an extensive theoretical elaboration and critical synthesis of existing research, this article contributes a unified conceptual framework for understanding and advancing learning-driven cloud task scheduling. It concludes that the convergence of deep reinforcement learning and optimal queuing theory, as exemplified by Kanikanti et al. (2025), represents not a marginal technical improvement but a paradigm shift in how future cloud ecosystems will be designed, governed, and optimized.
Similar Articles
- Dr. Julian Thorne, Advanced Taxonomic Characterization and Algorithmic Optimization of Distributed Stream Processing Workloads: A Multi-Dimensional Analysis of Hybrid Cloud Resource Orchestration , International Journal of Next-Generation Engineering and Technology: Vol. 3 No. 01 (2026): Volume 03 Issue 01
- Dr. Adrian K. Morales, Securing Multi-Tenant FPGA Accelerators for Cloud Cryptography: Architectures, Threat Models, and Practical Countermeasures , International Journal of Next-Generation Engineering and Technology: Vol. 2 No. 09 (2025): Volume 02 Issue 09
- Linh Thuy Nguyen, Kofi Mensah, Optimizing Software Effort Estimation: A Synergistic Hybrid Deep Learning Framework with Enhanced Metaheuristic Optimization , International Journal of Next-Generation Engineering and Technology: Vol. 2 No. 11 (2025): Volume 02 Issue 11
- Dr. Elena Markovic, Adaptive Latency-Aware Microservice Orchestration and Anomaly-Resilient Edge–Cloud Architectures for Mixed Reality and Time-Critical Applications , International Journal of Next-Generation Engineering and Technology: Vol. 1 No. 01 (2024): Volume 01 Issue 01
- Dr. Alejandro Cortés-Mendoza, Cloud Computing As A Socio-Technical And Environmental Infrastructure: Integrating Security, Sustainability, And Strategic Governance In The Post-Traditional Hosting Era , International Journal of Next-Generation Engineering and Technology: Vol. 2 No. 12 (2025): Volume 02 Issue 12
- Dr. Arjun V. Menon, Resilient Sustainability and Cloud Platform Strategies: Integrating Life-Cycle, Security, and Operational Excellence in Modern Technology Enterprises , International Journal of Next-Generation Engineering and Technology: Vol. 2 No. 11 (2025): Volume 02 Issue 11
- John M. Aldridge, Secure, Privacy-Preserving FPGA-Enabled Architectures for Big Data and Cloud Services: Theory, Methods, and Integrated Design Principles , International Journal of Next-Generation Engineering and Technology: Vol. 2 No. 11 (2025): Volume 02 Issue 11
- Dr. Elena M. Carter, Securing Multi-Tenant Cloud Environments: Architectural, Operational, and Defensive Strategies Integrating Containerization, Virtualization, and Intrusion Controls , International Journal of Next-Generation Engineering and Technology: Vol. 2 No. 10 (2025): Volume 02 Issue 10
- Dr. Eleanor Whitmore, Cloud-Native Smart Health Platforms: Scalable Machine Learning Deployment for Cardiovascular Prediction through Heroku, Salesforce, and Urban Data Ecosystems , International Journal of Next-Generation Engineering and Technology: Vol. 3 No. 01 (2026): Volume 03 Issue 01
- Sanjay K. Morello, Securing Multi-Tenant FPGA Clouds: Architectures, Threats, and Integrated Defenses for Trusted Reconfigurable Computing , International Journal of Next-Generation Engineering and Technology: Vol. 2 No. 08 (2025): Volume 02 Issue 08