Least-squares policy iteration is a useful reinforcement learning method in robotics due to its computational efficiency. However, it tends to be sensitive to outliers in observed rewards. In this paper, we propose an alternative method that employs the absolute loss for enhancing robustness and reliability. The proposed method is formulated as a linear programming problem which can be solved efficiently by standard optimization software, so the computational advantage is not sacrificed for gaining robustness and reliability. We demonstrate the usefulness of the proposed approach through a simulated robot-control task.
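The core idea in the abstract, replacing the squared loss with the absolute loss and solving the resulting problem as a linear program, can be illustrated with a small sketch. This is not the paper's full least absolute policy iteration algorithm, only the least-absolute-deviations fitting step it rests on; the function name `lad_fit` and the toy data are illustrative, and `scipy.optimize.linprog` stands in for the "standard optimization software" the abstract mentions.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(Phi, r):
    """Least-absolute-deviations fit: minimize sum_i |Phi_i w - r_i| over w.

    Recast as a linear program by introducing slacks u_i >= |Phi_i w - r_i|:
        minimize  sum(u)
        subject to   Phi w - r <= u   and   -(Phi w - r) <= u.
    """
    n, d = Phi.shape
    # decision vector x = [w (d entries), u (n entries)]; cost only on u
    c = np.concatenate([np.zeros(d), np.ones(n)])
    # stack the two one-sided constraints into A_ub x <= b_ub
    A_ub = np.block([[Phi, -np.eye(n)],
                     [-Phi, -np.eye(n)]])
    b_ub = np.concatenate([r, -r])
    bounds = [(None, None)] * d + [(0, None)] * n  # w free, u nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]

# toy data: rewards follow r = 1 + 2*s, with one grossly corrupted observation
s = np.arange(5.0)
Phi = np.column_stack([np.ones_like(s), s])   # features phi(s) = [1, s]
r = 1.0 + 2.0 * s
r[2] = 100.0                                  # outlier reward
w = lad_fit(Phi, r)                           # recovers roughly [1, 2]
```

Because four of the five points lie exactly on the true line, the absolute loss leaves the fit pinned to it, whereas a squared-loss fit would be dragged heavily toward the outlier; this is the robustness property the paper exploits.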
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Masashi SUGIYAMA, Hirotaka HACHIYA, Hisashi KASHIMA, Tetsuro MORIMURA, "Least Absolute Policy Iteration--A Robust Approach to Value Function Approximation" in IEICE TRANSACTIONS on Information and Systems,
vol. E93-D, no. 9, pp. 2555-2565, September 2010, doi: 10.1587/transinf.E93.D.2555.
Abstract: Least-squares policy iteration is a useful reinforcement learning method in robotics due to its computational efficiency. However, it tends to be sensitive to outliers in observed rewards. In this paper, we propose an alternative method that employs the absolute loss for enhancing robustness and reliability. The proposed method is formulated as a linear programming problem which can be solved efficiently by standard optimization software, so the computational advantage is not sacrificed for gaining robustness and reliability. We demonstrate the usefulness of the proposed approach through a simulated robot-control task.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E93.D.2555/_p
@ARTICLE{e93-d_9_2555,
author={Masashi SUGIYAMA and Hirotaka HACHIYA and Hisashi KASHIMA and Tetsuro MORIMURA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Least Absolute Policy Iteration--A Robust Approach to Value Function Approximation},
year={2010},
volume={E93-D},
number={9},
pages={2555--2565},
abstract={Least-squares policy iteration is a useful reinforcement learning method in robotics due to its computational efficiency. However, it tends to be sensitive to outliers in observed rewards. In this paper, we propose an alternative method that employs the absolute loss for enhancing robustness and reliability. The proposed method is formulated as a linear programming problem which can be solved efficiently by standard optimization software, so the computational advantage is not sacrificed for gaining robustness and reliability. We demonstrate the usefulness of the proposed approach through a simulated robot-control task.},
doi={10.1587/transinf.E93.D.2555},
ISSN={1745-1361},
month=sep,
}
TY - JOUR
TI - Least Absolute Policy Iteration--A Robust Approach to Value Function Approximation
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 2555
EP - 2565
AU - Masashi SUGIYAMA
AU - Hirotaka HACHIYA
AU - Hisashi KASHIMA
AU - Tetsuro MORIMURA
PY - 2010
DO - 10.1587/transinf.E93.D.2555
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E93-D
IS - 9
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - 2010/09//
AB - Least-squares policy iteration is a useful reinforcement learning method in robotics due to its computational efficiency. However, it tends to be sensitive to outliers in observed rewards. In this paper, we propose an alternative method that employs the absolute loss for enhancing robustness and reliability. The proposed method is formulated as a linear programming problem which can be solved efficiently by standard optimization software, so the computational advantage is not sacrificed for gaining robustness and reliability. We demonstrate the usefulness of the proposed approach through a simulated robot-control task.
ER -