In this paper, we propose a new reinforcement learning scheme called CHQ that can efficiently acquire appropriate policies under partially observable Markov decision processes (POMDPs) with probabilistic state transitions, which frequently arise in multi-agent systems where each agent independently takes a probabilistic action based on a partial observation of the underlying environment. The key idea of CHQ is to extend the HQ-learning scheme proposed by Wiering et al. so that it can learn the activation order of the MDP subtasks as well as an appropriate policy for each subtask. The effectiveness of the proposed scheme is evaluated experimentally. The results imply that it can acquire a deterministic policy with a sufficiently high success rate, even when the given task is a POMDP with probabilistic state transitions.
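Since the abstract only sketches the mechanism, the following is a minimal illustrative sketch of the hierarchical idea behind HQ-learning, which CHQ builds on: a chain of subagents, each holding a Q-table over (observation, action) pairs and an HQ-table that scores candidate subgoal observations, with control handed to the next subagent once the current subgoal is observed. The toy corridor POMDP, all hyperparameters, and the crude episodic HQ update below are assumptions made for illustration, not the authors' algorithm.

# Sketch of hierarchical Q-learning in the spirit of HQ-learning
# (Wiering & Schmidhuber), which CHQ extends. The environment, the
# hyperparameters, and the episodic HQ update are illustrative
# assumptions only.
import random

N_STATES, GOAL = 8, 7      # corridor states 0..7; reward at state 7
N_OBS, N_ACTIONS = 4, 2    # observation = state mod 4 (aliasing -> POMDP)
N_SUBAGENTS = 2            # two subtasks activated in sequence
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# Q[i][o][a]: action values of subagent i; HQ[i][g]: value of choosing
# observation g as subagent i's subgoal (its hand-off condition).
Q = [[[0.0] * N_ACTIONS for _ in range(N_OBS)] for _ in range(N_SUBAGENTS)]
HQ = [[0.0] * N_OBS for _ in range(N_SUBAGENTS)]

def eps_greedy(values):
    if random.random() < EPS:
        return random.randrange(len(values))
    return max(range(len(values)), key=values.__getitem__)

def step(s, a):
    # probabilistic state transitions: the chosen move flips 10% of the time
    if random.random() < 0.1:
        a = 1 - a
    s = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s, (1.0 if s == GOAL else 0.0), s == GOAL

for episode in range(5000):
    s, i = 0, 0
    g = eps_greedy(HQ[i])          # subagent 0 chooses its subgoal observation
    used = [(i, g)]                # remember which subgoals were activated
    ret, done, t = 0.0, False, 0
    while not done and t < 100:
        o = s % N_OBS
        a = eps_greedy(Q[i][o])
        s, r, done = step(s, a)
        o2 = s % N_OBS
        # standard Q-learning update inside the active subtask
        Q[i][o][a] += ALPHA * (r + GAMMA * max(Q[i][o2]) - Q[i][o][a])
        ret += GAMMA ** t * r
        if o2 == g and i < N_SUBAGENTS - 1:
            i += 1                 # subgoal observed: hand off to next subagent
            g = eps_greedy(HQ[i])
            used.append((i, g))
        t += 1
    for j, gj in used:             # crude episodic update of subgoal values
        HQ[j][gj] += ALPHA * (ret - HQ[j][gj])

The hand-off rule is what lets a memoryless policy solve a task with aliased observations: each subagent only needs a reactive mapping that is correct within its own subtask, and the learned subgoal sequence supplies the missing temporal context.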
Hiroshi OSADA, Satoshi FUJITA, "CHQ: A Multi-Agent Reinforcement Learning Scheme for Partially Observable Markov Decision Processes," IEICE Transactions on Information and Systems, vol. E88-D, no. 5, pp. 1004-1011, May 2005, doi: 10.1093/ietisy/e88-d.5.1004.
URL: https://global.ieice.org/en_transactions/information/10.1093/ietisy/e88-d.5.1004/_p
@ARTICLE{e88-d_5_1004,
author={Hiroshi OSADA and Satoshi FUJITA},
journal={IEICE Transactions on Information and Systems},
title={CHQ: A Multi-Agent Reinforcement Learning Scheme for Partially Observable Markov Decision Processes},
year={2005},
volume={E88-D},
number={5},
pages={1004-1011},
doi={10.1093/ietisy/e88-d.5.1004},
month={May},}
TY - JOUR
TI - CHQ: A Multi-Agent Reinforcement Learning Scheme for Partially Observable Markov Decision Processes
T2 - IEICE Transactions on Information and Systems
SP - 1004
EP - 1011
AU - Hiroshi OSADA
AU - Satoshi FUJITA
PY - 2005
DO - 10.1093/ietisy/e88-d.5.1004
JO - IEICE Transactions on Information and Systems
VL - E88-D
IS - 5
JA - IEICE Transactions on Information and Systems
Y1 - May 2005
ER -