This paper proposes a migration model for distributed server allocation. In distributed server allocation, each user is assigned to a server so as to minimize the communication delay. In the conventional model, a user cannot migrate to another server, in order to avoid service instability. We develop a model in which each user can migrate to another server while receiving services. We formulate the proposed model as an integer linear programming problem and prove that the considered problem is NP-complete. We introduce a heuristic algorithm. Numerical results show that the proposed model reduces the average communication delay by up to 59% compared with the conventional model.
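To make the allocation problem concrete, the following is a minimal toy sketch in Python. It assumes a simple pairwise delay model (user-to-server plus server-to-server plus server-to-user) and exhaustively searches all assignments; the delay numbers, the pairwise model, and the brute-force search are illustrative assumptions, not the paper's actual formulation or heuristic.

```python
from itertools import product

# Toy instance (all numbers hypothetical): user_server[u][s] is the delay from
# user u to server s; server_server[s][t] is the delay between servers s and t.
user_server = [[2, 5], [4, 1], [3, 3]]   # 3 users x 2 servers
server_server = [[0, 4], [4, 0]]

def max_pair_delay(assign):
    """Largest end-to-end delay over all user pairs under a given assignment.

    The pairwise delay u -> s_u -> s_v -> v is an illustrative assumption,
    not necessarily the delay model used in the paper.
    """
    worst = 0
    n = len(assign)
    for u in range(n):
        for v in range(u + 1, n):
            su, sv = assign[u], assign[v]
            d = user_server[u][su] + server_server[su][sv] + user_server[v][sv]
            worst = max(worst, d)
    return worst

# Exhaustive search over all server assignments: fine for toy sizes only.
# The paper proves the real problem NP-complete, which is why it resorts to
# a heuristic algorithm for larger instances.
best = min(product(range(2), repeat=3), key=max_pair_delay)
```

For this toy instance the search places all three users on server 0, since splitting users across servers pays the 4-unit inter-server delay on every cross pair.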
Souhei YANASE
Kyoto University
Fujun HE
Kyoto University
Haruto TAKA
Kyoto University
Akio KAWABATA
NTT Network Service Systems Laboratories
Eiji OKI
Kyoto University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Souhei YANASE, Fujun HE, Haruto TAKA, Akio KAWABATA, Eiji OKI, "Migration Model for Distributed Server Allocation" in IEICE TRANSACTIONS on Communications,
vol. E106-B, no. 1, pp. 44-56, January 2023, doi: 10.1587/transcom.2022EBP3046.
Abstract: This paper proposes a migration model for distributed server allocation. In distributed server allocation, each user is assigned to a server so as to minimize the communication delay. In the conventional model, a user cannot migrate to another server, in order to avoid service instability. We develop a model in which each user can migrate to another server while receiving services. We formulate the proposed model as an integer linear programming problem and prove that the considered problem is NP-complete. We introduce a heuristic algorithm. Numerical results show that the proposed model reduces the average communication delay by up to 59% compared with the conventional model.
URL: https://global.ieice.org/en_transactions/communications/10.1587/transcom.2022EBP3046/_p
@ARTICLE{e106-b_1_44,
author={Souhei YANASE and Fujun HE and Haruto TAKA and Akio KAWABATA and Eiji OKI},
journal={IEICE TRANSACTIONS on Communications},
title={Migration Model for Distributed Server Allocation},
year={2023},
volume={E106-B},
number={1},
pages={44--56},
abstract={This paper proposes a migration model for distributed server allocation. In distributed server allocation, each user is assigned to a server so as to minimize the communication delay. In the conventional model, a user cannot migrate to another server, in order to avoid service instability. We develop a model in which each user can migrate to another server while receiving services. We formulate the proposed model as an integer linear programming problem and prove that the considered problem is NP-complete. We introduce a heuristic algorithm. Numerical results show that the proposed model reduces the average communication delay by up to 59% compared with the conventional model.},
keywords={},
doi={10.1587/transcom.2022EBP3046},
ISSN={1745-1345},
month={January},}
TY - JOUR
TI - Migration Model for Distributed Server Allocation
T2 - IEICE TRANSACTIONS on Communications
SP - 44
EP - 56
AU - Souhei YANASE
AU - Fujun HE
AU - Haruto TAKA
AU - Akio KAWABATA
AU - Eiji OKI
PY - 2023
DO - 10.1587/transcom.2022EBP3046
JO - IEICE TRANSACTIONS on Communications
SN - 1745-1345
VL - E106-B
IS - 1
JA - IEICE TRANSACTIONS on Communications
Y1 - January 2023
AB - This paper proposes a migration model for distributed server allocation. In distributed server allocation, each user is assigned to a server so as to minimize the communication delay. In the conventional model, a user cannot migrate to another server, in order to avoid service instability. We develop a model in which each user can migrate to another server while receiving services. We formulate the proposed model as an integer linear programming problem and prove that the considered problem is NP-complete. We introduce a heuristic algorithm. Numerical results show that the proposed model reduces the average communication delay by up to 59% compared with the conventional model.
ER -