IEICE TRANSACTIONS on Information

CHQ: A Multi-Agent Reinforcement Learning Scheme for Partially Observable Markov Decision Processes

Hiroshi OSADA, Satoshi FUJITA

Summary:

In this paper, we propose a new reinforcement learning scheme called CHQ that can efficiently acquire appropriate policies under partially observable Markov decision processes (POMDPs) with probabilistic state transitions, a setting that frequently arises in multi-agent systems where each agent independently takes a probabilistic action based on a partial observation of the underlying environment. The key idea of CHQ is to extend the HQ-learning scheme proposed by Wiering et al. so that it can learn the activation order of the MDP subtasks as well as an appropriate policy for each subtask. The effectiveness of the proposed scheme is evaluated experimentally. The results indicate that CHQ can acquire a deterministic policy with a sufficiently high success rate, even when the given task is a POMDP with probabilistic state transitions.
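The hierarchical idea described in the summary can be illustrated with a minimal sketch. This is not the authors' CHQ implementation; it is a hypothetical toy illustration of the HQ-learning-style structure the paper builds on: each subagent keeps a Q-table for acting within its MDP subtask and an HQ-table for choosing the subgoal observation that ends the subtask and hands control to the next subagent. The corridor environment, update rules, and all parameter values below are assumptions chosen for illustration.

```python
import random
from collections import defaultdict

class Subagent:
    """One MDP subtask solver (illustrative, not the paper's exact scheme)."""
    def __init__(self, n_actions, observations):
        self.n_actions = n_actions
        self.q = defaultdict(float)               # (observation, action) -> value
        self.hq = {o: 0.0 for o in observations}  # candidate subgoal -> value

    def pick_subgoal(self, epsilon=0.1):
        # epsilon-greedy choice over the HQ-table
        if random.random() < epsilon:
            return random.choice(list(self.hq))
        return max(self.hq, key=self.hq.get)

    def pick_action(self, obs, epsilon=0.1):
        # epsilon-greedy choice over the subtask Q-table
        if random.random() < epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[(obs, a)])

    def update_q(self, obs, action, reward, next_obs, alpha=0.1, gamma=0.9):
        # standard tabular Q-learning update within the subtask
        best_next = max(self.q[(next_obs, a)] for a in range(self.n_actions))
        self.q[(obs, action)] += alpha * (reward + gamma * best_next
                                          - self.q[(obs, action)])

    def update_hq(self, subgoal, ret, alpha=0.1):
        # move the subgoal's value toward the return obtained after choosing it
        self.hq[subgoal] += alpha * (ret - self.hq[subgoal])

# Toy aliased corridor: true state 0..5, observation = state % 3, so the
# agent cannot distinguish e.g. states 0 and 3 (a simple POMDP).
# Actions: 0 = move left, 1 = move right; reward 1 on reaching state 5.
def run_episode(agents, max_steps=50):
    state, total, idx = 0, 0.0, 0
    subgoal = agents[idx].pick_subgoal()
    for _ in range(max_steps):
        obs = state % 3
        action = agents[idx].pick_action(obs)
        next_state = min(5, state + 1) if action == 1 else max(0, state - 1)
        reward = 1.0 if next_state == 5 else 0.0
        agents[idx].update_q(obs, action, reward, next_state % 3)
        total += reward
        state = next_state
        # reaching the subgoal observation passes control to the next subagent
        if state % 3 == subgoal and idx + 1 < len(agents):
            agents[idx].update_hq(subgoal, total)
            idx += 1
            subgoal = agents[idx].pick_subgoal()
        if state == 5:
            break
    return total

agents = [Subagent(n_actions=2, observations=[0, 1, 2]) for _ in range(2)]
returns = [run_episode(agents) for _ in range(200)]
```

Ordering the subagents and learning which observation should trigger the hand-off is what lets a chain of reactive (memoryless) policies solve a task that no single memoryless policy could, which is the role the activation-order learning plays in the scheme above.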

Publication
IEICE TRANSACTIONS on Information Vol.E88-D No.5 pp.1004-1011
Publication Date
2005/05/01
DOI
10.1093/ietisy/e88-d.5.1004
Type of Manuscript
PAPER
Category
Artificial Intelligence and Cognitive Science
