Rousslan F. J. DOSSA, Xinyu LIAN, Hirokazu NOMOTO, Takashi MATSUBARA, Kuniaki UEHARA
Reinforcement learning methods achieve performance superior to that of humans in a wide range of complex tasks and uncertain environments. However, high performance is not the sole metric for practical use, such as in game AI or autonomous driving. A highly efficient agent behaves greedily and selfishly, which is inconvenient for surrounding users; hence there is a demand for human-like agents. Imitation learning reproduces the behavior of a human expert and thereby builds a human-like agent, but its performance is limited to that of the expert. In this study, we propose a training scheme that constructs a human-like and efficient agent by mixing reinforcement and imitation learning, for both discrete and continuous action space problems. The proposed hybrid agent achieves higher performance than a strict imitation learning agent and exhibits more human-like behavior, as measured by a human sensitivity test.
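To make the idea of mixing the two learning signals concrete, the sketch below shows one plausible way a hybrid objective could be formed for the discrete-action case: a reinforcement learning term (a simple policy-gradient loss on the agent's own rollouts) is combined with an imitation term (behavior-cloning cross-entropy on expert demonstrations) through a mixing weight. This is an illustrative assumption, not the authors' exact formulation; the policy architecture, the REINFORCE-style RL loss, and the hyper-parameter `beta` are all hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiscretePolicy(nn.Module):
    """Small policy network producing action logits (illustrative only)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def hybrid_loss(policy, obs, actions, returns, expert_obs, expert_actions, beta=0.5):
    # RL term: REINFORCE-style policy gradient on the agent's own rollouts.
    log_probs = F.log_softmax(policy(obs), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(-1)).squeeze(-1)
    rl_loss = -(chosen * returns).mean()

    # Imitation term: behavior-cloning cross-entropy on expert state-action pairs.
    il_loss = F.cross_entropy(policy(expert_obs), expert_actions)

    # Hypothetical mixing weight `beta` trades efficiency (RL) against
    # human-likeness (imitation).
    return (1.0 - beta) * rl_loss + beta * il_loss


# Usage example with random placeholder data standing in for rollouts and demonstrations.
policy = DiscretePolicy(obs_dim=8, n_actions=4)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

obs = torch.randn(32, 8)
actions = torch.randint(0, 4, (32,))
returns = torch.randn(32)
expert_obs = torch.randn(32, 8)
expert_actions = torch.randint(0, 4, (32,))

loss = hybrid_loss(policy, obs, actions, returns, expert_obs, expert_actions, beta=0.5)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Under this reading, sweeping `beta` from 0 to 1 would interpolate between a purely efficiency-driven agent and a strict imitator, which is one natural way to realize the efficiency/human-likeness trade-off the abstract describes.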