Detection of Important States through an Iterative Q-value Algorithm for Explainable Reinforcement Learning

Location
Hilton Hawaiian Village, Honolulu, Hawaii
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2024
End Date
January 6, 2024
Description
To generate safe and trustworthy Reinforcement Learning agents, it is fundamental to recognize the meaningful states in which a particular action should be performed. This makes it possible to produce more accurate explanations of the trained agent's behaviour and simultaneously reduce the risk of committing a fatal error. In this study, we improve existing Q-value-based metrics for detecting essential states in Reinforcement Learning by introducing a scaled, iterative algorithm called IQVA (Iterative Q-Value Algorithm). The key observation behind our approach is that a state is important not only if the chosen action has a high impact but also if the state appears often across different episodes. We compare our approach with two baseline measures and a newly introduced value in grid-world environments to demonstrate its efficacy. In this way, we show how the proposed methodology highlights only the states that are meaningful for that particular agent, instead of emphasizing the importance of states that are rarely visited.
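The abstract only sketches the idea behind IQVA, so the following Python snippet is a minimal illustrative sketch rather than the algorithm defined in the paper: it combines a standard Q-value importance measure (the gap between the best and worst action values in a state) with a visitation-frequency weight that is refined iteratively over episodes. The names q_table, episodes, baseline_importance, and iterative_importance, as well as the exact weighting scheme, are assumptions made for illustration.

# Illustrative sketch only; the exact IQVA update rule is given in the paper.
from collections import defaultdict
import numpy as np

def baseline_importance(q_row):
    # Classic Q-value importance: gap between the best and worst action value.
    return float(np.max(q_row) - np.min(q_row))

def iterative_importance(q_table, episodes):
    # Weight each state's Q-value impact by how often the state recurs
    # across episodes, refining the estimate one episode at a time.
    visits = defaultdict(int)
    importance = defaultdict(float)
    for k, episode in enumerate(episodes, start=1):
        for state in episode:
            visits[state] += 1
            impact = baseline_importance(q_table[state])
            freq = visits[state] / k  # visit frequency relative to episodes seen so far
            # incremental (running-average) update of the scaled importance
            importance[state] += (impact * freq - importance[state]) / visits[state]
    return dict(importance)

# Toy usage: a tabular agent with two states and two actions
q_table = {0: np.array([1.0, 0.2]), 1: np.array([0.5, 0.49])}
episodes = [[0, 1], [0], [0, 1]]
print(iterative_importance(q_table, episodes))

Under this assumed weighting, a state with a large action-value gap that is visited in every episode scores high, while a rarely visited state is scaled down, which mirrors the key observation described in the abstract.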
Recommended Citation
Milani, Rudy; Moll, Maximilian; and De Leone, Renato, "Detection of Important States through an Iterative Q-value Algorithm for Explainable Reinforcement Learning" (2024). Hawaii International Conference on System Sciences 2024 (HICSS-57). 2.
https://aisel.aisnet.org/hicss-57/da/supply_chain/2