This paper introduces a deep reinforcement learning approach to solve the virtual network function scheduling problem in dynamic scenarios. We formulate an integer linear programming model for the problem in static scenarios. In dynamic scenarios, we define the state, action, and reward to form the learning approach. The learning agents use the asynchronous advantage actor-critic (A3C) algorithm. We assign a master agent and several worker agents to each network function virtualization node in the problem. The worker agents work in parallel to help the master agent make decisions. We compare the introduced approach with existing approaches by applying them in simulated environments. The existing approaches include three greedy approaches, a simulated annealing approach, and an integer linear programming approach. The numerical results show that the introduced deep reinforcement learning approach improves the performance by 6-27% in our examined cases.
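The master/worker design described in the abstract can be sketched as follows. This is a minimal toy illustration of the asynchronous actor-critic pattern only, not the paper's actual model: the scalar parameter, rewards, and critic baseline are hypothetical stand-ins for the scheduling environment.

```python
import threading
import random

# Minimal sketch of the asynchronous actor-critic pattern: one master holds
# shared parameters, several workers explore in parallel and push
# advantage-weighted updates. All environment details (rewards, state values)
# are hypothetical stand-ins, not the paper's VNF scheduling model.

class Master:
    def __init__(self):
        self.theta = 0.0             # shared policy parameter (toy scalar)
        self.lock = threading.Lock()

    def apply_update(self, grad, lr=0.1):
        with self.lock:              # asynchronous but race-free update
            self.theta += lr * grad

def worker(master, seed, steps=100):
    rng = random.Random(seed)
    for _ in range(steps):
        reward = rng.uniform(0.0, 1.0)  # stand-in for a scheduling reward
        value = 0.5                     # stand-in critic baseline
        advantage = reward - value      # A3C-style advantage signal
        master.apply_update(advantage)

master = Master()
threads = [threading.Thread(target=worker, args=(master, s)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"learned parameter: {master.theta:.3f}")
```

The key point mirrored from the abstract is that workers run concurrently and only the parameter update is serialized, so exploration is parallel while the master's parameters stay consistent.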
Zixiao ZHANG
Kyoto University
Fujun HE
Kyoto University
Eiji OKI
Kyoto University
Zixiao ZHANG, Fujun HE, Eiji OKI, "Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach" in IEICE TRANSACTIONS on Communications,
vol. E106-B, no. 7, pp. 557-570, July 2023, doi: 10.1587/transcom.2022EBP3160.
URL: https://global.ieice.org/en_transactions/communications/10.1587/transcom.2022EBP3160/_p
@ARTICLE{e106-b_7_557,
author={Zixiao ZHANG and Fujun HE and Eiji OKI},
journal={IEICE TRANSACTIONS on Communications},
title={Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach},
year={2023},
volume={E106-B},
number={7},
pages={557-570},
doi={10.1587/transcom.2022EBP3160},
ISSN={1745-1345},
month={July}
}
TY - JOUR
TI - Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach
T2 - IEICE TRANSACTIONS on Communications
SP - 557
EP - 570
AU - Zixiao ZHANG
AU - Fujun HE
AU - Eiji OKI
PY - 2023
DO - 10.1587/transcom.2022EBP3160
JO - IEICE TRANSACTIONS on Communications
SN - 1745-1345
VL - E106-B
IS - 7
JA - IEICE TRANSACTIONS on Communications
Y1 - July 2023
ER -