
Author Search Result

[Author] Akio KAWABATA (10 hits)

Results 1-10 of 10
  • Heuristic Approach to Distributed Server Allocation with Preventive Start-Time Optimization against Server Failure

    Souhei YANASE  Shuto MASUDA  Fujun HE  Akio KAWABATA  Eiji OKI  

     
    PAPER-Network

      Publicized:
    2021/02/01
      Vol:
    E104-B No:8
      Page(s):
    942-950

    This paper presents a distributed server allocation model with preventive start-time optimization against a single server failure. The presented model preventively determines the assignment of servers to users under each failure pattern to minimize the largest maximum delay among all failure patterns. We formulate the proposed model as an integer linear programming (ILP) problem and prove the NP-completeness of the considered problem. As the numbers of users and servers increase, the size of the ILP problem grows and the computation time to solve it becomes excessively large. We develop a heuristic approach that applies simulated annealing and the ILP approach in a hybrid manner to obtain the solution. Numerical results reveal that the developed heuristic approach reduces the computation time by 26% compared to the ILP approach while increasing the largest maximum delay by just 3.4% on average. It reduces the largest maximum delay compared with the start-time optimization model, and it avoids the instability caused by the unnecessary disconnections permitted by the run-time optimization model.
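
    A minimal Python sketch of the objective described above, not the paper's formulation: for each single-server-failure pattern, every user is greedily reassigned to its lowest-delay surviving server, and the largest maximum delay over all patterns is reported. The delay matrix and the greedy reassignment are illustrative assumptions.

        # delay[u][s]: assumed delay between user u and server s (illustrative values)
        def largest_max_delay(delay, servers):
            worst = 0.0
            for failed in servers:                               # each single-failure pattern
                surviving = [s for s in servers if s != failed]
                # greedy reassignment: each user picks its lowest-delay surviving server
                pattern_max = max(min(row[s] for s in surviving) for row in delay)
                worst = max(worst, pattern_max)
            return worst

        delay = [[10, 25, 40], [30, 12, 22], [18, 35, 15]]       # 3 users x 3 servers
        print(largest_max_delay(delay, servers=[0, 1, 2]))       # largest maximum delay: 25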

  • Migration Model for Distributed Server Allocation

    Souhei YANASE  Fujun HE  Haruto TAKA  Akio KAWABATA  Eiji OKI  

     
    PAPER-Network Management/Operation

      Publicized:
    2022/07/05
      Vol:
    E106-B No:1
      Page(s):
    44-56

    This paper proposes a migration model for distributed server allocation. In distributed server allocation, each user is assigned to a server to minimize the communication delay. In the conventional model, a user cannot migrate to another server, in order to avoid instability. We develop a model in which each user can migrate to another server while receiving services. We formulate the proposed model as an integer linear programming problem and prove that the considered problem is NP-complete. We introduce a heuristic algorithm. Numerical results show that the proposed model reduces the average communication delay by up to 59% compared to the conventional model.
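
    The sketch below is only an illustration, not the paper's model: it contrasts the average delay when each user stays on its initially best server with the case where users may migrate to the currently best server in each epoch. The per-epoch delay values are made up.

        # delays_per_epoch[t][u][s]: assumed delay of user u to server s at epoch t
        def avg_delay(delays_per_epoch, allow_migration):
            total, count = 0.0, 0
            fixed = [min(range(len(row)), key=row.__getitem__)   # initial assignment
                     for row in delays_per_epoch[0]]
            for epoch in delays_per_epoch:
                for u, row in enumerate(epoch):
                    s = (min(range(len(row)), key=row.__getitem__)
                         if allow_migration else fixed[u])
                    total += row[s]
                    count += 1
            return total / count

        epochs = [[[10, 30], [25, 12]],                          # epoch 0: 2 users x 2 servers
                  [[35, 14], [25, 12]]]                          # epoch 1: user 0's best server changes
        print(avg_delay(epochs, False), avg_delay(epochs, True)) # 17.25 vs. 12.0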

  • A Network Design Scheme in Delay Sensitive Monitoring Services Open Access

    Akio KAWABATA  Takuya TOJO  Bijoy CHAND CHATTERJEE  Eiji OKI  

     
    PAPER-Network Management/Operation

      Publicized:
    2023/04/19
      Vol:
    E106-B No:10
      Page(s):
    903-914

    Mission-critical monitoring services, such as finding criminals with monitoring cameras, require rapid detection of newly updated data, so suppressing delay is desirable. Taking this direction, this paper proposes a network design scheme that minimizes this delay for monitoring services consisting of Internet-of-Things (IoT) devices located at terminal endpoints (TEs), databases (DBs), and applications (APLs). The proposed scheme determines the allocation of the DB and APLs and the selection of the server to which each TE belongs; the DB and APLs are allocated to optimal servers chosen from the multiple servers in the network. We formulate the proposed network design scheme as an integer linear programming problem. The delay reduction effect of the proposed scheme is evaluated on two network topologies and a monitoring camera system network. In the two network topologies, the delays of the proposed scheme are 78 and 80 percent of that of the conventional scheme. In the monitoring camera system network, the delay of the proposed scheme is 77 percent of that of the conventional scheme. These results indicate that the proposed scheme reduces the delay compared to the conventional scheme, in which APLs are located near the TEs. The computation time of the proposed scheme is acceptable for the design phase before the service is launched. The proposed scheme can contribute to a network design that quickly detects newly added objects in monitoring services.
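
    A brute-force Python sketch of the design decision described above, under assumptions that are not the paper's: the DB and the APL are each placed on one of the candidate servers, each TE attaches to the server minimizing its path delay, and the objective is the worst delay over the path TE -> attached server -> DB -> APL.

        from itertools import product

        # te_to_srv[t][s]: assumed TE-to-server delay; srv_to_srv[a][b]: assumed server-to-server delay
        def design(te_to_srv, srv_to_srv):
            servers = range(len(srv_to_srv))
            best = None
            for db, apl in product(servers, repeat=2):           # try every DB/APL placement
                worst = max(min(row[s] + srv_to_srv[s][db] + srv_to_srv[db][apl]
                                for s in servers)                # each TE picks its best server
                            for row in te_to_srv)
                if best is None or worst < best[0]:
                    best = (worst, db, apl)
            return best

        te_to_srv = [[5, 20], [18, 6]]                           # 2 TEs x 2 servers
        srv_to_srv = [[0, 8], [8, 0]]
        print(design(te_to_srv, srv_to_srv))                     # (worst delay, DB server, APL server)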

  • MHND: Multi-Homing Network Design Model for Delay Sensitive Applications Open Access

    Akio KAWABATA  Bijoy CHAND CHATTERJEE  Eiji OKI  

     
    PAPER-Network

      Publicized:
    2023/07/24
      Vol:
    E106-B No:11
      Page(s):
    1143-1153

    When mission-critical applications are provided over a network, high availability is required in addition to low delay. This paper proposes a multi-homing network design model, named MHND, that achieves low delay, high availability, and a guarantee of the order of events. MHND maintains the event occurrence order with a multi-homing configuration using conservative synchronization. We formulate MHND as an integer linear programming problem to minimize the delay and prove that the distributed server allocation problem with MHND is NP-complete. Numerical results indicate that, as the multi-homing number, which is the number of servers to which each user belongs, increases, the availability increases at the cost of increased delay. Notably, a multi-homing number of two or more achieves approximately an order of magnitude higher availability than conventional single-homing at the expense of a delay increase of up to two times. Using MHND, flexible network design is achieved based on the acceptable delay of the service and the required availability.
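
    The availability-delay trade-off can be pictured with the small sketch below, which is not the paper's model: server failures are assumed independent, each user connects to its k nearest servers, and the user's delay is taken as the worst of those k servers. All numbers are made up.

        # delays_to_servers: assumed user-to-server delays; k: multi-homing number
        def multi_homing(delays_to_servers, per_server_availability, k):
            chosen = sorted(delays_to_servers)[:k]               # the k nearest servers
            availability = 1 - (1 - per_server_availability) ** k
            return availability, max(chosen)                     # delay grows with k

        for k in (1, 2, 3):
            print(k, multi_homing([10, 18, 35], per_server_availability=0.99, k=k))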

  • CMND: Consistent-Aware Multi-Server Network Design Model for Delay-Sensitive Applications

    Akio KAWABATA  Bijoy CHAND CHATTERJEE  Eiji OKI  

     
    PAPER-Network System

      Vol:
    E107-B No:3
      Page(s):
    321-329

    This paper proposes a network design model that considers data consistency for a delay-sensitive distributed processing system. Data consistency is determined by collating a server's own state with the states of the slave servers. If the state mismatches those of the other servers, a rollback process is initiated to modify the state and guarantee data consistency. In the proposed model, the selected servers and the master-slave server pairs are determined to minimize the end-to-end delay and the delay required for data consistency. We formulate the proposed model as an integer linear programming problem and evaluate the delay performance and computation time in two network models with two, three, and four slave servers. The proposed model reduces the delay for data consistency by up to 31 percent compared to that of a typical model that collates the status of all servers at one master server. The computation time is a few seconds, which is acceptable for network design before service launch. These results indicate that the proposed model is effective for delay-sensitive applications.
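
    The benefit of choosing master-slave pairs can be illustrated with the sketch below, an assumption-laden example rather than the paper's formulation: the consistency-check delay is taken as the largest master-to-collated-server delay, compared between one master collating every server and locally chosen pairs.

        # d[a][b]: assumed delay between servers a and b
        def collate_all_at_one_master(d, master):
            return max(d[master][s] for s in range(len(d)) if s != master)

        def collate_in_pairs(d, pairs):
            return max(d[m][s] for m, s in pairs)

        d = [[0, 4, 9, 11],
             [4, 0, 6, 8],
             [9, 6, 0, 3],
             [11, 8, 3, 0]]
        print(collate_all_at_one_master(d, master=0))            # 11: one master collates all servers
        print(collate_in_pairs(d, pairs=[(0, 1), (2, 3)]))       # 4: pairs keep collation local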

  • Algorithms for Distributed Server Allocation Problem

    Takaaki SAWA  Fujun HE  Akio KAWABATA  Eiji OKI  

     
    PAPER-Network

      Publicized:
    2020/05/08
      Vol:
    E103-B No:11
      Page(s):
    1341-1352

    This paper proposes two algorithms, namely the Server-User Matching (SUM) algorithm and the Extended Server-User Matching (ESUM) algorithm, for the distributed server allocation problem. The server allocation problem is to determine the matching between servers and users that minimizes the maximum delay, which is the maximum time to complete user synchronization. We analyze the computational time complexity: we prove that the SUM algorithm obtains optimal solutions in polynomial time for the special case in which all server-server delay values are the same constant, and we provide upper and lower bounds when the SUM algorithm is applied to the general server allocation problem. We show that the ESUM algorithm is a fixed-parameter tractable algorithm that attains the optimal solution for the server allocation problem parameterized by the number of servers. Numerical results show that the computation time of ESUM follows the analyzed complexity, and that the ESUM algorithm outperforms the approach of solving the integer linear programming formulation with our examined solver.
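
    One simple reading of the special case, sketched below as an illustration and not as the paper's SUM algorithm: when every server-server delay equals a constant c, assigning each user to its lowest-delay server suffices, and the maximum synchronization delay between any two users is the sum of the two largest user-server delays plus c (the same-server case is ignored here).

        # user_to_server[u][s]: assumed user-to-server delay; c: common server-server delay
        def max_sync_delay(user_to_server, c):
            nearest = [min(row) for row in user_to_server]       # per-user best server delay
            top_two = sorted(nearest, reverse=True)[:2]
            return sum(top_two) + c

        user_to_server = [[7, 15], [20, 9], [12, 14]]
        print(max_sync_delay(user_to_server, c=5))               # 12 + 9 + 5 = 26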

  • An Optimistic Synchronization Based Optimal Server Selection Scheme for Delay Sensitive Communication Services Open Access

    Akio KAWABATA  Bijoy Chand CHATTERJEE  Eiji OKI  

     
    PAPER-Network System

      Publicized:
    2021/04/09
      Vol:
    E104-B No:10
      Page(s):
    1277-1287

    In distributed processing for communication services, a proper server selection scheme is required to reduce delay while ensuring the event occurrence order. Although a conservative synchronization algorithm (CSA) has been used to achieve this goal, an optimistic synchronization algorithm (OSA) is also feasible for synchronizing distributed systems. In contrast to CSA, which reproduces events in occurrence order before applications process them, OSA can realize low-delay communication because events are processed as they arrive. This paper proposes an optimal server selection scheme that uses OSA for distributed processing systems to minimize the end-to-end delay under the condition that the maximum status holding time is limited. In other words, the end-to-end delay is minimized based on the allowed rollback time, which is given according to application design aspects and the availability of computing resources. Numerical results indicate that the proposed scheme reduces the delay compared to the conventional scheme.
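
    A minimal sketch of this kind of selection, whose feasibility rule and delay values are assumptions rather than the paper's scheme: a candidate server is feasible only if the spread of its user delays (a proxy for how far out of order events can arrive, and hence for the required rollback) stays within the allowed rollback time, and the feasible server with the smallest worst-case delay is chosen.

        # user_delays_per_server[s]: assumed delays from each user to candidate server s
        def select_server(user_delays_per_server, rollback_limit):
            feasible = [(max(d), s) for s, d in enumerate(user_delays_per_server)
                        if max(d) - min(d) <= rollback_limit]    # spread within rollback budget
            return min(feasible) if feasible else None           # (worst delay, server id)

        delays = [[10, 30], [22, 25], [18, 28]]                  # 3 candidate servers x 2 users
        print(select_server(delays, rollback_limit=8))           # (25, 1): server 1 is chosen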

  • Fabrication of YBa2Cu3O7-x-PrBa2Cu3O7-y Hetero-Structure by Using a Hollow Cathode Discharge Sputtering System

    Akio KAWABATA  Tadayuki KOBAYASHI  Kouichi USAMI  Toshinari GOTO  

     
    PAPER

      Vol:
    E76-C No:8
      Page(s):
    1236-1240

    A sputtering system using dc hollow cathode discharge was developed for the purpose of fabricating high-Tc superconducting devices. Using this system, as-grown superconducting thin films of YBCO have been formed on MgO and SrTiO3 substrates. The influence of sputtering conditions, such as the substrate temperature and discharge gas pressure, on the Tc and lattice parameter was investigated. It was found that superconducting films on MgO with Tc(zero) higher than 87 K were routinely obtained at a pressure of 820 mTorr (5% O2) and a substrate temperature of 700°C during deposition. The a/b-axis and c-axis oriented YBCO-PBCO hetero-structures were also successfully formed on MgO and SrTiO3 substrates.

  • Packet Processing Architecture with Off-Chip Last Level Cache Using Interleaved 3D-Stacked DRAM Open Access

    Tomohiro KORIKAWA  Akio KAWABATA  Fujun HE  Eiji OKI  

     
    PAPER-Network System

      Publicized:
    2020/08/06
      Vol:
    E104-B No:2
      Page(s):
    149-157

    The performance of packet processing applications depends on the memory access speed of network systems. Table lookup, which requires fast memory access, is one of the most common processes in various packet processing applications and can be a dominant performance bottleneck. Therefore, in Network Function Virtualization (NFV)-aware environments, the on-chip fast cache memories of the CPUs of general-purpose hardware become critical to achieving high packet processing speeds of over tens of Gbps. Also, multiple types of applications and complex applications are executed simultaneously in the same system in carrier network systems, which requires adequate cache memory capacity as well. In this paper, we propose a packet processing architecture that utilizes interleaved 3-Dimensional (3D)-stacked Dynamic Random Access Memory (DRAM) devices as an off-chip Last Level Cache (LLC) in addition to the several levels of dedicated cache memories of each CPU core. Entries of a lookup table are distributed over every bank and vault to utilize both bank interleaving and vault-level memory parallelism. Frequently accessed entries in the 3D-stacked DRAM are also cached in the on-chip dedicated cache memories of each CPU core. The evaluation results show that the proposed architecture reduces the memory access latency by 57% and increases the throughput by 100%, while reducing the blocking probability by about 10%, compared to the architecture with a shared on-chip LLC. These results indicate that 3D-stacked DRAM can be practical as an off-chip LLC in parallel packet processing systems.
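
    How lookup-table entries might be spread over vaults and banks can be pictured with the sketch below; the hash-based placement, the vault and bank counts, and the keys are illustrative assumptions, not the evaluated architecture.

        NUM_VAULTS, NUM_BANKS = 16, 8                            # assumed device geometry

        def place(key: int):
            # hash each lookup key to a (vault, bank) pair so consecutive lookups
            # spread over vaults and banks and can proceed in parallel
            h = hash(key)
            return h % NUM_VAULTS, (h // NUM_VAULTS) % NUM_BANKS

        table = {}
        for key in (0x0A000001, 0x0A000002, 0x0A000003):         # example lookup keys
            vault, bank = place(key)
            table.setdefault((vault, bank), {})[key] = f"action-for-{key:#x}"
        print(table)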

  • Participating-Domain Segmentation Based Server Selection Scheme for Real-Time Interactive Communication Open Access

    Akio KAWABATA  Bijoy CHAND CHATTERJEE  Eiji OKI  

     
    PAPER-Network

      Publicized:
    2020/01/17
      Vol:
    E103-B No:7
      Page(s):
    736-747

    This paper proposes an efficient server selection scheme for a successive participation scenario with participating-domain segmentation. The scheme is utilized by distributed processing systems for real-time interactive communication to suppress the communication latency of a wide-area network. In the proposed scheme, users participate in server selection one after another. The proposed scheme determines recommended servers, and a new user selects the recommended server first. Before each user participates, the recommended servers are determined assuming that users exist in the considered regions; a recommended server is determined for each divided region so as to minimize the latency. The new user then selects the available recommended server of the region where the user is located. We formulate an integer linear programming problem to determine the recommended servers. Numerical results indicate that, at the cost of additional computation, the proposed scheme offers smaller latency than the conventional scheme. We also investigate different policies for dividing the users' participation in the recommended-server finding process in the proposed scheme.
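
    The two-phase idea can be sketched as below; the regions, latencies, and data structures are illustrative assumptions, not the paper's formulation: before users join, each divided region is assigned the server minimizing latency for a hypothetical user located there, and an arriving user simply takes its region's recommended server.

        # region_to_server_latency[region][server]: assumed latency for a user in that region
        def precompute_recommended(region_to_server_latency):
            return {region: min(lat, key=lat.get)                # lowest-latency server per region
                    for region, lat in region_to_server_latency.items()}

        latencies = {"east": {"s1": 12, "s2": 30},
                     "west": {"s1": 28, "s2": 9}}
        recommended = precompute_recommended(latencies)
        print(recommended)                                       # {'east': 's1', 'west': 's2'}
        print(recommended["west"])                               # server for a newly joining west user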