Location
Hilton Hawaiian Village, Honolulu, Hawaii
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2024, 12:00 AM
End Date
January 6, 2024, 12:00 AM
Description
Scheduling problems arise in various industrial and service sectors and have a significant impact on the performance of these systems. The overwhelming majority of industrial problems exhibit a data-analytic or optimization nature and can be reduced to known machine learning or optimization problems, respectively. This paper demonstrates the integration of optimization and Deep Reinforcement Learning (DRL) techniques to address scheduling problems. The study explores the potential advantages of Imitation Learning (IL) principles in building a combined optimization and machine learning pipeline for online scheduling. We employ an evolutionary optimization algorithm as an expert policy to generate high-quality solutions to scheduling problems. The obtained solutions are passed in the form of experiences to train a DRL-based IL technique. The presented approach combines the Non-dominated Sorting Genetic Algorithm III (NSGA-III) with Monotonic Advantage Re-Weighted Imitation Learning (MARWIL) and is evaluated on real instances of a Hybrid Flow Shop (HFS) scheduling problem. The experimental analysis demonstrates that the presented DRL-based IL approach learns an appropriate scheduling policy that is superior to training an agent without prior experiences. Additionally, the derived policy sustains a steady increase in performance when the agent is exposed to different unknown problems, in contrast to an established baseline from the literature for the same problems.
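The pipeline the abstract describes can be sketched in miniature: an expert optimizer (NSGA-III in the paper; a toy stand-in here) supplies near-optimal decisions, which are replayed as (state, action, advantage) experiences to train a policy via advantage-weighted imitation, mirroring MARWIL's exp(beta * A) weighting. Everything below (the state/action sizes, the stand-in expert, the tabular policy) is an illustrative assumption, not the paper's implementation.

```python
# Hypothetical minimal sketch of advantage-weighted imitation learning
# (MARWIL-style), using only the standard library. The "expert" stands in
# for an evolutionary optimizer that has already found good scheduling moves.
import math
import random

random.seed(0)

N_STATES, N_ACTIONS = 4, 3
BETA = 1.0   # advantage-weighting coefficient, as in MARWIL's exp(beta * A)
LR = 0.5

def expert_experiences(n=200):
    """Yield (state, action, advantage) tuples from a stand-in expert.

    We pretend the optimizer found that action (state % N_ACTIONS) yields
    the best schedule (advantage 1.0); occasionally a noisy, weaker
    alternative (advantage 0.1) appears in the experience buffer.
    """
    for _ in range(n):
        s = random.randrange(N_STATES)
        if random.random() < 0.8:
            yield s, s % N_ACTIONS, 1.0
        else:
            yield s, random.randrange(N_ACTIONS), 0.1

# Tabular softmax policy: logits[s][a].
logits = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def probs(s):
    z = [math.exp(l) for l in logits[s]]
    total = sum(z)
    return [p / total for p in z]

# Advantage-weighted behavior cloning: ascend the gradient of
#   exp(BETA * A) * log pi(a | s)
# so experiences with higher advantage pull the policy more strongly.
for s, a, adv in expert_experiences():
    w = math.exp(BETA * adv)
    p = probs(s)
    for i in range(N_ACTIONS):
        grad = w * ((1.0 if i == a else 0.0) - p[i])
        logits[s][i] += LR * grad

# The trained policy should prefer the expert's action in each state.
greedy = [max(range(N_ACTIONS), key=lambda a: logits[s][a]) for s in range(N_STATES)]
print(greedy)
```

The weighting is the key design choice: plain behavior cloning treats every expert experience equally, while the exponential advantage weight lets the learner imitate the optimizer's best moves more strongly than its mediocre ones.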
Recommended Citation
Nahhas, Abdulrahman; Kharitonov, Andrey; Haertel, Christian; and Turowski, Klaus, "Imitation Learning Based on Deep Reinforcement Learning for Solving Scheduling Problems" (2024). Hawaii International Conference on System Sciences 2024 (HICSS-57). 2.
https://aisel.aisnet.org/hicss-57/da/digital_twins/2