To address the heavy reliance on prior knowledge and the poor timeliness of traditional algorithms, this paper proposes an emergency communication network topology planning method based on deep reinforcement learning. Drawing on the characteristics of emergency communication networks and an analogy with chess, we map the node-layout and topology-planning problems in network planning onto a board-game problem; evaluation criteria for network planning are constructed from two factors, network coverage and connectivity; Monte Carlo tree search combined with self-play is used to generate network-planning sample data; and policy and value networks based on residual networks are designed for network planning. On this basis, the model is built and trained with the TensorFlow library. Simulation results show that the proposed method effectively achieves intelligent planning of network topology with excellent timeliness and feasibility.
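The abstract's evaluation criterion combines two factors, coverage and connectivity. The following is a minimal sketch of that idea, not the authors' implementation: the grid size, radii, and equal 0.5/0.5 weights are illustrative assumptions.

```python
# Sketch: score a candidate node layout by combining area coverage with a
# connectivity bonus, as the abstract's two-factor evaluation suggests.
from collections import deque
from math import hypot

def coverage(nodes, grid=10, radius=3.0):
    """Fraction of grid cells within `radius` of at least one node."""
    covered = sum(
        1
        for x in range(grid)
        for y in range(grid)
        if any(hypot(x - nx, y - ny) <= radius for nx, ny in nodes)
    )
    return covered / (grid * grid)

def connected(nodes, link_range=4.0):
    """True if the node graph (edge = within link_range) is one component."""
    if not nodes:
        return False
    seen, queue = {0}, deque([0])  # BFS from node 0
    while queue:
        i = queue.popleft()
        for j in range(len(nodes)):
            if j not in seen and hypot(nodes[i][0] - nodes[j][0],
                                       nodes[i][1] - nodes[j][1]) <= link_range:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(nodes)

def score(nodes, w_cov=0.5, w_con=0.5):
    """Weighted evaluation: coverage plus an all-or-nothing connectivity term."""
    return w_cov * coverage(nodes) + w_con * (1.0 if connected(nodes) else 0.0)

layout = [(2, 2), (5, 5), (8, 8)]
print(round(score(layout), 3))
```

In the paper's game formulation this score would play the role of the terminal reward guiding Monte Carlo tree search and self-play; a real implementation would compute coverage and link range from radio propagation models rather than fixed Euclidean radii.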
Changsheng YIN
National University of Defense Technology
Ruopeng YANG
National University of Defense Technology
Wei ZHU
National University of Defense Technology
Xiaofei ZOU
National University of Defense Technology
Junda ZHANG
Naval Aviation University
Changsheng YIN, Ruopeng YANG, Wei ZHU, Xiaofei ZOU, Junda ZHANG, "Optimal Planning of Emergency Communication Network Using Deep Reinforcement Learning" in IEICE TRANSACTIONS on Communications,
vol. E104-B, no. 1, pp. 20-26, January 2021, doi: 10.1587/transcom.2020EBP3061.
URL: https://global.ieice.org/en_transactions/communications/10.1587/transcom.2020EBP3061/_p
@ARTICLE{e104-b_1_20,
author={Changsheng YIN and Ruopeng YANG and Wei ZHU and Xiaofei ZOU and Junda ZHANG},
journal={IEICE TRANSACTIONS on Communications},
title={Optimal Planning of Emergency Communication Network Using Deep Reinforcement Learning},
year={2021},
volume={E104-B},
number={1},
pages={20-26},
doi={10.1587/transcom.2020EBP3061},
ISSN={1745-1345},
month={January},}
TY - JOUR
TI - Optimal Planning of Emergency Communication Network Using Deep Reinforcement Learning
T2 - IEICE TRANSACTIONS on Communications
SP - 20
EP - 26
AU - Changsheng YIN
AU - Ruopeng YANG
AU - Wei ZHU
AU - Xiaofei ZOU
AU - Junda ZHANG
PY - 2021
DO - 10.1587/transcom.2020EBP3061
JO - IEICE TRANSACTIONS on Communications
SN - 1745-1345
VL - E104-B
IS - 1
JA - IEICE TRANSACTIONS on Communications
Y1 - January 2021
ER -