Model-based reinforcement learning uses the information gathered during each experience more efficiently than model-free reinforcement learning. This is especially attractive in multiagent systems, where a large number of experiences are needed to achieve good performance. In this paper, model-based reinforcement learning is developed for a group of self-interested agents with sequential action selection, building on traditional prioritized sweeping. Each decision-making situation in this learning process, called an extensive Markov game, is modeled as an n-person general-sum extensive-form game with perfect information. A modified version of backward induction is proposed for action selection; it adjusts the tradeoff between selecting subgame perfect equilibrium points, as the optimal joint actions, and learning new joint actions. The algorithm is proved to be convergent, and its behavior is discussed in light of new results on the convergence of traditional prioritized sweeping.
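The abstract describes the action-selection rule only at a high level. Below is a minimal sketch, assuming a finite perfect-information game tree, of how backward induction might be blended with an epsilon-greedy exploration step so that agents usually follow the subgame perfect equilibrium but occasionally try new joint actions. The Node class, the epsilon parameter, and the epsilon-greedy rule are illustrative assumptions; the paper's exact tradeoff mechanism is not given in the abstract.

```python
import random

class Node:
    """One node of a finite perfect-information game tree (hypothetical layout)."""
    def __init__(self, player=None, children=None, payoffs=None):
        self.player = player            # index of the agent moving at this node
        self.children = children or {}  # maps each action to a child Node
        self.payoffs = payoffs          # payoff vector at terminal nodes, else None

def backward_induction(node, epsilon=0.1):
    """Return (payoff vector, chosen action) for the subtree rooted at node.

    With probability 1 - epsilon the acting agent follows the subgame
    perfect equilibrium action (best for its own payoff entry); with
    probability epsilon it tries a different action instead.
    """
    if node.payoffs is not None:        # terminal node: payoffs are given
        return node.payoffs, None

    # Solve every subgame first, as in ordinary backward induction.
    values = {a: backward_induction(child, epsilon)[0]
              for a, child in node.children.items()}

    # Subgame perfect choice for the agent moving at this node.
    chosen = max(values, key=lambda a: values[a][node.player])

    # Exploration: occasionally pick a non-equilibrium action so the
    # agents also gather experience about new joint actions.
    if len(values) > 1 and random.random() < epsilon:
        chosen = random.choice([a for a in values if a != chosen])

    return values[chosen], chosen

# Tiny two-agent example: with epsilon = 0 this reduces to plain backward
# induction and returns the subgame perfect payoffs (3, 1) via action "L".
leaf = lambda *p: Node(payoffs=p)
game = Node(player=0, children={
    "L": Node(player=1, children={"l": leaf(3, 1), "r": leaf(0, 0)}),
    "R": Node(player=1, children={"l": leaf(1, 2), "r": leaf(2, 1)}),
})
payoffs, action = backward_induction(game, epsilon=0.0)
```

With epsilon = 0 the sketch is exactly backward induction; raising epsilon shifts the balance toward exploring non-equilibrium joint actions, which is the tradeoff the abstract attributes to the modified algorithm.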
Ali AKRAMIZADEH, Ahmad AFSHAR, Mohammad Bagher MENHAJ, Samira JAFARI, "Model-Based Reinforcement Learning in Multiagent Systems with Sequential Action Selection," IEICE Transactions on Information and Systems, vol. E94-D, no. 2, pp. 255-263, February 2011, doi: 10.1587/transinf.E94.D.255.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E94.D.255/_p
@ARTICLE{e94-d_2_255,
  author={Ali AKRAMIZADEH and Ahmad AFSHAR and Mohammad Bagher MENHAJ and Samira JAFARI},
  journal={IEICE Transactions on Information and Systems},
  title={Model-Based Reinforcement Learning in Multiagent Systems with Sequential Action Selection},
  year={2011},
  volume={E94-D},
  number={2},
  pages={255-263},
  doi={10.1587/transinf.E94.D.255},
  ISSN={1745-1361},
  month={February},
}
TY - JOUR
TI - Model-Based Reinforcement Learning in Multiagent Systems with Sequential Action Selection
T2 - IEICE Transactions on Information and Systems
AU - Ali AKRAMIZADEH
AU - Ahmad AFSHAR
AU - Mohammad Bagher MENHAJ
AU - Samira JAFARI
PY - 2011
Y1 - February 2011
DO - 10.1587/transinf.E94.D.255
SN - 1745-1361
VL - E94-D
IS - 2
SP - 255
EP - 263
ER -