Adaptive Selection of Auxiliary Tasks in UNREAL
- Author
- Hidenori Itaya, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
- Publication
- International Joint Conference on Artificial Intelligence, 2nd Scaling-Up Reinforcement Learning Workshop, 2019
Download: PDF (English)
Deep reinforcement learning (RL) has difficulty training an agent stably to high performance because complex problems contain large state spaces. Unsupervised reinforcement and auxiliary learning (UNREAL) achieves higher performance in complex environments by introducing auxiliary tasks: it supports the training of the main task by training auxiliary tasks alongside it during the training phase. However, the auxiliary tasks used in UNREAL are not necessarily effective in every problem setting. Although we need to design auxiliary tasks that are effective for a target task, designing them manually takes a considerable amount of time. In this paper, we propose a novel auxiliary task called “auxiliary selection.” Our auxiliary selection adaptively selects auxiliary tasks in accordance with the task and the environment. Experimental results show that our method can select effective auxiliary tasks and can train a network efficiently.
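To make the idea concrete: UNREAL optimizes the main-task loss plus a weighted sum of auxiliary-task losses (e.g. pixel control, reward prediction, value replay). A minimal sketch of adaptive selection, assuming the selection is realized as a softmax over learnable per-task logits (a hypothetical stand-in for the paper's actual mechanism, whose details are not given in this abstract):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def combined_loss(main_loss, aux_losses, selection_logits):
    """UNREAL-style total loss: the main-task loss plus a weighted sum of
    auxiliary-task losses. The per-task weights come from a softmax over
    selection logits, so tasks the selector favors contribute more.
    Returns the total loss and the current selection weights."""
    weights = softmax(selection_logits)
    total = main_loss + sum(w * l for w, l in zip(weights, aux_losses))
    return total, weights

# Example with three UNREAL auxiliary tasks (names from the UNREAL paper;
# the loss values and logits here are illustrative, not measured):
aux_losses = [0.8, 0.3, 0.5]   # pixel control, reward prediction, value replay
logits = [0.0, 2.0, -1.0]      # selector currently favors reward prediction
total, weights = combined_loss(1.2, aux_losses, logits)
```

In practice the logits would themselves be updated during training (e.g. by gradient descent on a validation signal), so the weighting, and hence the effective task selection, adapts to the environment.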