Abstract:
Machine scheduling is a complex field that demands efficient solutions for optimizing resource allocation, time management, and overall performance. Reinforcement Learning (RL) has emerged as a promising paradigm for addressing these challenges. Unlike traditional machine learning approaches that rely on labeled data or predefined patterns, RL enables agents to learn from experience by interacting with their environment and to make decisions that maximize cumulative reward.
This thesis explores the application of RL to machine scheduling. It examines four categories of RL algorithms: model-free value-iteration methods, model-free policy-iteration methods, model-based methods, and integrated model-free and model-based methods, highlighting the distinctive characteristics of commonly used algorithms in each category.
Through a comprehensive review of existing literature, this study assesses the effectiveness and limitations of RL algorithms in the context of machine scheduling. It identifies key factors that guide the selection of the most suitable RL method for specific scheduling challenges. The analysis encompasses both quantitative aspects, such as solution efficiency, and qualitative aspects, such as scalability and model robustness.
Furthermore, this thesis evaluates the outcomes of previous studies that have applied diverse RL algorithms to scheduling tasks, considering both the quantitative efficiency of the resulting solutions and the qualitative properties that determine the adaptability and reliability of the proposed models.
In summary, this thesis lays the foundation for applying RL algorithms to machine scheduling problems. The analysis of the various RL categories provides a comprehensive overview of available methodologies and supports the selection of the most promising approaches for further research in machine scheduling.