
Author Search Result

[Author] Eiji KAWAI (6 hits)

  • Reducing Processor Usage on Heavily-Loaded Network Servers with POSIX Real-Time Scheduling Control

    Eiji KAWAI  Youki KADOBAYASHI  Suguru YAMAGUCHI  

     
    PAPER-System Programs
    Vol: E88-D No:6  Page(s): 1168-1177

    Polling I/O mechanisms on the Unix platform, such as select() and poll(), incur high processing overhead when they are used in a heavily-loaded network server with many concurrent open sockets. Such a large waste of processing power causes not only service degradation but also other problems such as high electric power consumption and a worsened MTBF of server hosts. It is thus a serious issue, especially for large-scale service providers such as Internet data centers (iDCs), where a great number of heavily-loaded network servers are operated. As a solution to this problem, we propose a technique for fine-grained control of the invocation intervals of the polling I/O function. The uniqueness of this study is its use of POSIX real-time scheduling to enable such fine-grained execution control. Although earlier solutions, such as explicit event-delivery mechanisms, also addressed this problem, they require major modifications to the OS kernel and a transition from the traditional polling I/O model to a new explicit event-notification model. In contrast, our technique can be implemented at low cost because it merely inserts a few small blocks of code into the server program and requires no modification of the OS kernel.
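
    A minimal sketch of the idea, not taken from the paper: the server process is moved into the SCHED_FIFO real-time class and poll() invocations are paced at a fixed interval instead of being re-entered immediately. The 5 ms interval, the event handler, and the omitted socket setup are illustrative assumptions (Linux, Python).

    import os
    import select
    import time

    POLL_INTERVAL = 0.005  # assumed 5 ms between poll() invocations

    def serve(poller: select.poll, handle_event) -> None:
        # Move this process into a real-time scheduling class so the
        # chosen invocation interval is honored even under heavy load
        # (requires root or CAP_SYS_NICE on Linux).
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(10))
        next_run = time.monotonic()
        while True:
            for fd, events in poller.poll(0):  # non-blocking poll
                handle_event(fd, events)
            # Sleep until the next scheduled invocation instead of
            # re-entering poll() immediately and burning CPU.
            next_run += POLL_INTERVAL
            delay = next_run - time.monotonic()
            if delay > 0:
                time.sleep(delay)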

  • Elastic and Adaptive Resource Orchestration Architecture on 3-Tier Network Virtualization Model

    Masayoshi SHIMAMURA  Hiroaki YAMANAKA  Akira NAGATA  Katsuyoshi IIDA  Eiji KAWAI  Masato TSURU  

     
    PAPER-Information Network
    Publicized: 2016/01/18
    Vol: E99-D No:4  Page(s): 1127-1138

    Network virtualization environments (NVEs) are emerging to meet the increasingly diverse demands of Internet users, where a virtual network (VN) can be constructed to accommodate each specific application service. In the future Internet, diverse service providers (SPs) will provide application services on their own VNs running across diverse infrastructure providers (InPs) that provide the physical resources in an NVE. To realize both efficient resource utilization and good QoS for each individual service in such environments, SPs should perform adaptive control of network and computational resources under dynamic and competitive resource sharing, instead of explicitly reserving sufficient physical resources for their VNs. Meanwhile, two novel concepts, software-defined networking (SDN) and network function virtualization (NFV), have emerged to facilitate efficient use of network and computational resources, flexible provisioning, network programmability, unified management, and so on, which enable us to implement such adaptive resource control. In this paper, therefore, we propose an architectural design of network orchestration that enables SPs to proactively maintain the QoS of their applications through efficient resource control on their VNs, by introducing a virtual network provider (VNP) between InPs and SPs to form a 3-tier model and by integrating SDN and NFV functionalities into the NVE framework. We define new north-bound interfaces (NBIs) for resource requests, resource upgrades, resource programming, and alert notifications, while using the standard OpenFlow interfaces for resource control on users' traffic flows. The feasibility of the proposed architecture is demonstrated through network experiments using a prototype implementation and a sample application service on the nation-wide testbed networks JGN-X and RISE.
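
    As an illustration of the kind of message such NBIs might carry, here is a hypothetical SP-side resource request; the field names, values, and JSON encoding are assumptions for illustration, not the architecture's actual interface definition (Python).

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ResourceRequest:
        vn_id: str           # virtual network the SP wants to adjust
        node: str            # virtual node to scale
        cpu_cores: int       # requested computational resources
        bandwidth_mbps: int  # requested link bandwidth

    # An SP would submit this to the VNP through the resource-request NBI.
    req = ResourceRequest(vn_id="vn-042", node="cache-1",
                          cpu_cores=4, bandwidth_mbps=200)
    print(json.dumps(asdict(req)))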

  • RISE: A Wide-Area Hybrid OpenFlow Network Testbed

    Yoshihiko KANAUMI  Shu-ichi SAITO  Eiji KAWAI  Shuji ISHII  Kazumasa KOBAYASHI  Shinji SHIMOJO  

     
    PAPER-Network
    Vol: E96-B No:1  Page(s): 108-118

    The deployment of hybrid wide-area OpenFlow networks is essential for the gradual integration of OpenFlow technology into existing wide-area networks, because replacing such networks with OpenFlow-enabled ones all at once is impractical. In practice, however, the design, deployment, and operation of such hybrid OpenFlow networks are often conducted intuitively, without in-depth technical consideration. In this paper, we systematically discuss the technical aspects of the hybrid architecture for OpenFlow networks, based on our experience in developing wide-area hybrid OpenFlow networks on JGN2plus and JGN-X, nation-wide testbed networks in Japan. We also describe the design and operation of RISE (Research Infrastructure for large-Scale network Experiments) on JGN-X, whose objective is to support a variety of OpenFlow network experiments.
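
    One common ingredient of hybrid deployments, sketched here for illustration and not necessarily the design used in RISE, is a lowest-priority flow rule that hands all unmatched traffic back to the switch's legacy forwarding pipeline, so OpenFlow experiments and ordinary traffic coexist on the same switch. The sketch uses the Ryu controller framework (Python, OpenFlow 1.3).

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class HybridDefault(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Lowest-priority rule: any packet not claimed by an
            # experiment falls through to legacy (NORMAL) forwarding.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
            inst = [parser.OFPInstructionActions(
                ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))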

  • Service Migration Scheduling with Bandwidth Limitation against Crowd Mobility in Edge Computing Environments

    Hiroaki YAMANAKA  Yuuichi TERANISHI  Eiji KAWAI  

     
    PAPER-Network
    Publicized: 2020/09/11
    Vol: E104-B No:3  Page(s): 240-250

    Edge computing offers computing capability with ultra-low response times by leveraging servers close to end-user devices. Because end-user devices are mobile, the latency between a server and a device can grow until the response time becomes unacceptable for an application service. Service (container) migration that follows the handover of end-user devices preserves the response time. However, when an event such as commuting moves a mass of people through the same geographic area at the same time, the resulting service migrations generate heavy bandwidth usage in the mobile backhaul network, reducing the bandwidth available for ordinary application traffic. Shaping the migration traffic limits this bandwidth usage, but it delays service migration and increases the response time of the container for the moving end-user device. Furthermore, delaying migration processes accumulates containers waiting for migration, which inflates the number of migration decisions to be made (i.e., the system load). In this paper, we propose a migration scheduling method that controls the bandwidth used for migration in a network while ensuring timely processing of service migrations. Simulations comparing the proposal with state-of-the-art methods show that it always keeps migration bandwidth usage under the predetermined threshold. It also reduced the number of containers exceeding the acceptable response time to as little as 40% of that of the compared state-of-the-art methods, and it minimized the number of migration decisions.
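
    A minimal sketch of a bandwidth-capped migration scheduler in the spirit of the proposal, not the paper's actual algorithm: each time slot it admits the most urgent waiting migrations while their summed rate stays under the threshold. The deadline and bandwidth fields are illustrative assumptions (Python).

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Migration:
        deadline: float                # latest acceptable finish time
        container_id: str = field(compare=False)
        bandwidth_mbps: float = field(compare=False)

    def schedule_slot(waiting: list[Migration],
                      threshold_mbps: float) -> list[Migration]:
        """Admit the most urgent migrations whose total rate fits the cap."""
        heapq.heapify(waiting)  # earliest deadline first
        admitted, deferred, used = [], [], 0.0
        while waiting:
            m = heapq.heappop(waiting)
            if used + m.bandwidth_mbps <= threshold_mbps:
                admitted.append(m)
                used += m.bandwidth_mbps
            else:
                deferred.append(m)  # remains queued for the next slot
        waiting.extend(deferred)
        return admitted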

  • Design and Implementation of an Edge Computing Testbed to Simplify Experimental Environment Setup

    Hiroaki YAMANAKA  Yuuichi TERANISHI  Eiji KAWAI  Hidehisa NAGANO  Hiroaki HARAI  

     
    PAPER-Dependable Computing
    Publicized: 2022/05/27
    Vol: E105-D No:9  Page(s): 1516-1528

    Running IoT applications on edge computing infrastructures has the benefits of low response times and efficient bandwidth usage. System verification on a testbed is required before deploying IoT applications in production environments. In a testbed, Docker containers are preferable for a smooth transition of tested application programs to production environments. In addition, the round-trip times (RTTs) between Docker containers and clients must be ensured according to the target application's response-time requirements. However, existing testbed systems do not ensure the RTTs between Docker containers and clients, so setting up a testbed environment requires handling a large amount of configuration data, including the RTTs between all pairs of wireless base-station nodes and servers. In this paper, we present an edge computing testbed system with simple application programming interfaces (APIs) that ensures the RTTs between Docker containers and clients. The proposed system automatically determines which servers to place Docker containers on, according to the virtual regions and RTTs specified by testbed users through the APIs. The virtual regions provide reduced-size information about the RTTs in a network. In the proposed system, the configuration data size is reduced by a factor equal to the number of servers, and the command-argument length is reduced to approximately one third or less, while the system running time increases by only 4.3 s.
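
    A hypothetical sketch of the kind of simplified API described above: the user requests a container in a virtual region with an RTT bound, and the system picks a satisfying server. The function name, region identifiers, and per-server RTT table are illustrative assumptions, not the testbed's published interface (Python).

    from dataclasses import dataclass

    @dataclass
    class Placement:
        server: str
        region: str

    # Assumed per-server metadata: virtual region plus a representative
    # RTT (ms) from that region's clients, standing in for the full
    # base-station-to-server RTT matrix.
    SERVERS = {
        "edge-a": {"region": "region-1", "rtt_ms": 3.0},
        "edge-b": {"region": "region-1", "rtt_ms": 9.0},
        "core-c": {"region": "region-2", "rtt_ms": 25.0},
    }

    def place_container(image: str, region: str, max_rtt_ms: float) -> Placement:
        """Pick a server in the virtual region that meets the RTT bound."""
        for name, meta in SERVERS.items():
            if meta["region"] == region and meta["rtt_ms"] <= max_rtt_ms:
                # A real system would launch the Docker container here.
                return Placement(server=name, region=region)
        raise RuntimeError("no server satisfies the RTT requirement")

    print(place_container("my-iot-app:latest", "region-1", max_rtt_ms=5.0))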

  • Duplicated Hash Routing: A Robust Algorithm for a Distributed WWW Cache System

    Eiji KAWAI  Kadohito OSUGA  Ken-ichi CHINEN  Suguru YAMAGUCHI  

     
    PAPER
    Vol: E83-D No:5  Page(s): 1039-1047

    Hash routing is an algorithm for distributed WWW caching systems that achieves a high hit rate by preventing objects from being duplicated across caches. However, one of its drawbacks is a lack of robustness against failures. As the WWW has become a vital service on the Internet, the fault tolerance of the systems that provide it has grown in importance. In this paper, we propose duplicated hash routing, an extension of hash routing. Our algorithm introduces minimal redundancy to maintain system performance when some caching nodes crash. In addition, we optionally allow each node to cache objects requested by its local clients (local caching), which may waste some of the system's cache capacity but can reduce the network traffic between caching nodes. We evaluate various aspects of system performance, such as hit rates, error rates, and network traffic, through simulations and compare the results with those of other algorithms. The results show that our algorithm achieves both high fault tolerance and high performance with low system overhead.
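
    A minimal sketch of hash routing with one duplicate per object, in the spirit of the algorithm rather than its exact specification: a second hash places a backup copy on a different node, so requests can fail over when the primary node is down. Node names and the hash choice are illustrative assumptions (Python).

    import hashlib

    NODES = ["cache0", "cache1", "cache2", "cache3"]

    def _h(url: str, salt: str = "") -> int:
        digest = hashlib.md5((salt + url).encode()).digest()
        return int.from_bytes(digest[:8], "big")

    def route(url: str, alive: set[str]) -> str:
        """Return the cache node that should serve url, with failover."""
        n = len(NODES)
        primary = _h(url) % n
        # A second hash picks a distinct node for the duplicate, so each
        # object has exactly two copies in the system.
        backup = (primary + 1 + _h(url, "dup") % (n - 1)) % n
        for i in (primary, backup):
            if NODES[i] in alive:
                return NODES[i]
        raise RuntimeError("both replicas are unavailable")

    # Example: requests fail over when cache2 crashes.
    print(route("http://example.com/a.html", {"cache0", "cache1", "cache3"}))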