Dept. of Robotics Science and Technology,
Chubu University


Adaptive Selection of Auxiliary Tasks Using Deep Reinforcement Learning for Video Game Strategy

Authors
Hidenori Itaya, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi
Publication
IIEEJ Transactions on Image Electronics and Visual Computing, 2024

Download: PDF (English)

Multitask learning can be used to efficiently acquire factors and features that are shared across several different tasks. It has been applied in various fields because solving related tasks with a single model can improve that model's performance. One type of multitask learning uses auxiliary tasks: performance on the target task is improved by learning the auxiliary tasks simultaneously. For video game strategy, unsupervised reinforcement and auxiliary learning (UNREAL) has achieved high performance in a maze game by introducing auxiliary tasks. However, this approach requires auxiliary tasks that are appropriate for the target task, and these are very difficult to determine in advance because the most effective auxiliary task changes dynamically with the learning status of the target task. We therefore propose an adaptive selection mechanism for auxiliary tasks, called auxiliary selection, based on deep reinforcement learning. We applied our method to UNREAL and experimentally confirmed its effectiveness in a variety of video games.
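To illustrate the general idea of adaptively selecting among candidate auxiliary tasks, the sketch below uses a simple gradient-bandit update rather than the paper's deep-reinforcement-learning selector; the class name `AuxiliarySelector`, the task names, and the improvement signal are all assumptions for illustration, not the authors' implementation.

```python
import math
import random


class AuxiliarySelector:
    """Illustrative bandit-style selector over candidate auxiliary tasks.

    Simplified stand-in for adaptive auxiliary-task selection: preferences
    over tasks are updated from the observed change in the target task's
    return, and a softmax over the preferences gives selection probabilities.
    """

    def __init__(self, task_names, step_size=0.1, temperature=1.0):
        self.task_names = list(task_names)
        self.preferences = {name: 0.0 for name in self.task_names}
        self.step_size = step_size
        self.temperature = temperature

    def _probabilities(self):
        # Numerically stable softmax over task preferences.
        logits = [self.preferences[n] / self.temperature for n in self.task_names]
        max_logit = max(logits)
        exps = [math.exp(l - max_logit) for l in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def select(self):
        """Sample one auxiliary task according to the current softmax."""
        probs = self._probabilities()
        return random.choices(self.task_names, weights=probs, k=1)[0]

    def update(self, task_name, return_improvement):
        """Reinforce a task in proportion to how much the target-task return
        improved while that auxiliary task was trained alongside it."""
        probs = dict(zip(self.task_names, self._probabilities()))
        for name in self.task_names:
            indicator = 1.0 if name == task_name else 0.0
            self.preferences[name] += (
                self.step_size * return_improvement * (indicator - probs[name])
            )


if __name__ == "__main__":
    # Hypothetical auxiliary tasks in the spirit of UNREAL (pixel control,
    # reward prediction, value replay). The improvement signal here is
    # random noise standing in for a measured change in target-task return.
    selector = AuxiliarySelector(["pixel_control", "reward_prediction", "value_replay"])
    for _ in range(100):
        task = selector.select()
        improvement = random.gauss(0.0, 1.0)  # placeholder signal
        selector.update(task, improvement)
    print(selector.preferences)
```

In this toy setup, tasks whose training coincides with larger target-task improvements accumulate higher preference and are selected more often, which mirrors, in a much simpler form, the adaptive behavior the paper aims for.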
