Efficient task scheduling in multi-cloud environments is crucial for optimizing resource utilization, reducing execution time, minimizing costs, and enhancing overall system performance. Traditional scheduling approaches, including heuristic and rule-based methods, often struggle with dynamic workload fluctuations and resource heterogeneity, leading to inefficiencies. This study proposes a reinforcement learning-based scheduling framework that dynamically adapts to real-time cloud conditions to optimize task allocation. The scheduling problem is formulated as a Markov Decision Process (MDP), and a deep reinforcement learning model is trained to learn optimal scheduling policies. Experimental results show that the proposed approach reduces makespan, improves resource utilization, lowers energy consumption, and decreases operational costs compared with traditional scheduling methods. The model adapts to varying workload intensities and cloud configurations, demonstrating its scalability and robustness, and comparative analysis against heuristic- and rule-based techniques further confirms the advantage of reinforcement learning for multi-cloud task scheduling. Despite the computational overhead incurred during training, the proposed model offers long-term performance benefits, making it a viable solution for real-world cloud computing applications. The study highlights the potential of reinforcement learning to transform cloud resource management and lays the foundation for future advancements in autonomous, intelligent cloud scheduling frameworks.
Keywords
Reinforcement Learning, Task Scheduling, Multi-Cloud Computing, Resource Optimization, Deep Learning.
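To make the MDP formulation mentioned in the abstract concrete, the following is a minimal, hypothetical sketch: states summarize per-cloud queued load, actions assign the incoming task to one of several clouds, and the reward penalizes the task's completion time. The environment, state discretization, cloud speeds, and reward shape are illustrative assumptions for a tabular Q-learning toy, not the paper's actual deep reinforcement learning design.

```python
# Hypothetical toy MDP for multi-cloud task scheduling (tabular Q-learning).
# All constants and the reward function are assumptions for illustration.
import random

NUM_CLOUDS = 3
SPEEDS = [1.0, 1.5, 2.0]  # assumed relative processing speeds per cloud


def discretize(loads):
    """Bucket each cloud's queued work into low/medium/high (0/1/2)."""
    return tuple(min(int(l // 5), 2) for l in loads)


def train(episodes=200, tasks_per_episode=30, alpha=0.1, gamma=0.9, eps=0.2):
    """Learn a Q-table mapping (state, cloud) -> value via Q-learning."""
    q = {}
    rng = random.Random(0)
    for _ in range(episodes):
        loads = [0.0] * NUM_CLOUDS
        for _ in range(tasks_per_episode):
            size = rng.uniform(1, 10)  # task length in abstract work units
            s = discretize(loads)
            # Epsilon-greedy action selection over clouds.
            if rng.random() < eps:
                a = rng.randrange(NUM_CLOUDS)
            else:
                a = max(range(NUM_CLOUDS), key=lambda c: q.get((s, c), 0.0))
            # Completion time = queue wait + execution time on chosen cloud.
            finish = loads[a] + size / SPEEDS[a]
            reward = -finish  # shorter completion time -> higher reward
            loads[a] = finish
            s2 = discretize(loads)
            best_next = max(q.get((s2, c), 0.0) for c in range(NUM_CLOUDS))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
    return q


def schedule(q, loads, size):
    """Greedy policy: pick the cloud with the highest learned Q-value."""
    s = discretize(loads)
    return max(range(NUM_CLOUDS), key=lambda c: q.get((s, c), 0.0))
```

In the full framework described by the abstract, the Q-table would be replaced by a deep network over a richer state (workload intensity, heterogeneous resource profiles, energy and cost signals), but the MDP skeleton above captures the same state-action-reward loop.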