To address the tension between exploration and exploitation in deep reinforcement learning, this paper proposes a reward-based exploration strategy combined with Softmax action selection (RBE-Softmax) as a dynamic exploration strategy that guides the agent's learning. The advantage of the proposed method is that it exploits characteristics of the agent's learning process to adapt the exploration parameters online, so the agent can select potentially optimal actions more effectively. The method is evaluated on discrete and continuous control tasks in OpenAI Gym, and the empirical results show that RBE-Softmax yields statistically significant improvements in the performance of deep reinforcement learning algorithms.
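The abstract describes softmax (Boltzmann) action selection whose exploration parameter is adapted online from the agent's rewards. A minimal sketch of that idea is shown below; the temperature schedule `reward_based_temperature` and its bounds `t_max`, `t_min`, `r_best` are illustrative assumptions, not the paper's actual adaptation rule.

```python
import numpy as np

def softmax_action(q_values, temperature, rng=None):
    """Boltzmann (softmax) action selection over Q-value estimates.

    Higher temperature -> more uniform (exploratory) action probabilities;
    lower temperature -> greedier (exploitative) selection.
    """
    rng = rng or np.random.default_rng()
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()          # subtract max for numerical stability
    probs = np.exp(prefs)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs)), probs

def reward_based_temperature(recent_rewards, t_max=5.0, t_min=0.1, r_best=200.0):
    """Hypothetical reward-based schedule: as recent average reward approaches
    a target r_best, the temperature anneals from t_max toward t_min,
    shifting the agent from exploration to exploitation."""
    progress = np.clip(np.mean(recent_rewards) / r_best, 0.0, 1.0)
    return t_max - (t_max - t_min) * progress
```

For example, early in training (low rewards) the schedule returns a high temperature and actions are sampled nearly uniformly; as rewards climb, the temperature falls and the softmax concentrates on the highest-valued action.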
Zhi-xiong XU
Army Engineering University
Lei CAO
Army Engineering University
Xi-liang CHEN
Army Engineering University
Chen-xi LI
Army Engineering University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Zhi-xiong XU, Lei CAO, Xi-liang CHEN, Chen-xi LI, "Reward-Based Exploration: Adaptive Control for Deep Reinforcement Learning" in IEICE TRANSACTIONS on Information,
vol. E101-D, no. 9, pp. 2409-2412, September 2018, doi: 10.1587/transinf.2018EDL8011.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2018EDL8011/_p
@ARTICLE{e101-d_9_2409,
author={Zhi-xiong XU and Lei CAO and Xi-liang CHEN and Chen-xi LI},
journal={IEICE TRANSACTIONS on Information},
title={Reward-Based Exploration: Adaptive Control for Deep Reinforcement Learning},
year={2018},
volume={E101-D},
number={9},
pages={2409-2412},
doi={10.1587/transinf.2018EDL8011},
ISSN={1745-1361},
month={September},
}
TY - JOUR
TI - Reward-Based Exploration: Adaptive Control for Deep Reinforcement Learning
T2 - IEICE TRANSACTIONS on Information
SP - 2409
EP - 2412
AU - Zhi-xiong XU
AU - Lei CAO
AU - Xi-liang CHEN
AU - Chen-xi LI
PY - 2018
DO - 10.1587/transinf.2018EDL8011
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E101-D
IS - 9
JA - IEICE TRANSACTIONS on Information
Y1 - September 2018
ER -