
Author Search Result

[Author] Eiji OKI (86 hits)

41-60 of 86 hits

  • Delay Distribution Based Remote Data Fetch Scheme for Hadoop Clusters in Public Cloud

    Ravindra Sandaruwan RANAWEERA  Eiji OKI  Nattapong KITSUWAN  

     
    PAPER-Network

      Publicized:
    2019/02/04
      Vol:
    E102-B No:8
      Page(s):
    1617-1625

    Apache Hadoop and its ecosystem have become the de facto platform for processing large-scale data, or Big Data, because they hide the complexity of distributed computing, scheduling, and communication while providing fault tolerance. Cloud-based environments are becoming a popular platform for hosting Hadoop clusters due to their low initial cost and virtually limitless capacity. However, cloud-based Hadoop clusters bring their own challenges because of contradictory design principles: Hadoop is designed on the shared-nothing principle, while the cloud is based on the concepts of consolidation and resource sharing. Most of Hadoop's features are designed for on-premises data centers where the cluster topology is known. Hadoop depends on the rack assignment of servers (configured by the cluster administrator) to calculate the distance between servers, and it uses this distance to find the best remote server from which to fetch non-local data. However, public cloud providers do not share the rack information of virtual servers with their tenants. Without rack information, Hadoop may fetch data from a remote server that is on the other side of the data center. To overcome this problem, we propose a delay distribution based scheme for public cloud-based Hadoop clusters that finds the closest server from which to fetch non-local data. The proposed scheme bases server selection on the delay distributions between server pairs, where each delay distribution is obtained by periodically measuring the round-trip time between servers. Our experiments show that the proposed scheme outperforms conventional Hadoop by nearly 12% in terms of non-local data fetch time. This reduction in data fetch time leads to a reduction in job run time, especially in real-world multi-user clusters where non-local data fetching can happen frequently.
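
    A minimal Python sketch of the selection idea (not the authors' implementation): round-trip times are recorded periodically per server pair, and the replica holder with the lowest median RTT is chosen. The server names, the probing loop, and the use of the median as the summary statistic are illustrative assumptions.

```python
import random
import statistics
from collections import defaultdict

rtt_samples = defaultdict(list)  # (local, remote) -> list of RTT samples in ms

def record_rtt(local, remote, rtt_ms):
    """Called periodically by a background probe between server pairs."""
    rtt_samples[(local, remote)].append(rtt_ms)

def closest_replica(local, replica_holders):
    """Choose the remote server to fetch a non-local block from."""
    def median_rtt(remote):
        samples = rtt_samples[(local, remote)]
        return statistics.median(samples) if samples else float("inf")
    return min(replica_holders, key=median_rtt)

# Toy usage with made-up numbers: worker-2 looks much closer than worker-7.
for _ in range(20):
    record_rtt("worker-1", "worker-2", random.gauss(0.4, 0.05))
    record_rtt("worker-1", "worker-7", random.gauss(1.8, 0.30))
print(closest_replica("worker-1", ["worker-2", "worker-7"]))
```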

  • Experimental 5-Tb/s Packet-by-Packet Wavelength Switching System Using 2.5-Gb/s 8-λ WDM Links

    Kimihiro YAMAKOSHI  Nobuaki MATSUURA  Kohei NAKAI  Eiji OKI  Naoaki YAMANAKA  Takaharu OHYAMA  Yuji AKAHORI  

     
    PAPER-Switching

      Vol:
    E85-B No:7
      Page(s):
    1293-1301

    We have developed an experimental 5-Tb/s packet-by-packet wavelength switching system, OPTIMA-2. This paper describes its hardware architecture. OPTIMA-2 is a non-blocking 3-stage switch using optical wavelength division multiplexing (WDM) links and dynamic bandwidth sharing. A new scheduling algorithm for variable-length packets is used for the receiver ports of the WDM links, and simulation results show that it can suppress short-packet delay while maintaining high throughput. An implementation of the WDM link using field-programmable gate arrays and a compact planar lightwave circuit platform is described. Experimental results for the basic operation of optical wavelength switching are also presented.

  • A Pipelined Maximal-Sized Matching Scheme for High-Speed Input-Buffered Switches

    Eiji OKI  Roberto ROJAS-CESSA  H. Jonathan CHAO  

     
    PAPER-Switching

      Vol:
    E85-B No:7
      Page(s):
    1302-1311

    This paper proposes an innovative Pipeline-based Maximal-sized Matching scheduling approach, called PMM, for input-buffered switches. It relaxes the requirement that a maximal matching be completed within a single time slot, allowing the matching to span any number of time slots. In the PMM approach, arbitration operates in a pipelined manner using K subschedulers. Each subscheduler is allowed more than one time slot to complete its matching, and in every time slot one of the subschedulers provides a matching result. We adopt an extended version of Dual Round-Robin Matching (DRRM), called iterative DRRM (iDRRM), as the maximal matching algorithm in each subscheduler. PMM maximizes the efficiency of the adopted arbitration scheme by allowing sufficient time for the required number of iterations. We show that PMM preserves the 100% throughput under uniform traffic and the fairness for best-effort traffic of the non-pipelined adopted algorithm, while ensuring that cells from the same virtual output queue (VOQ) are transmitted in sequence. In addition, we confirm that the delay performance of PMM is not significantly degraded by increasing the pipeline degree, i.e., the number of subschedulers, when the number of outstanding requests for each subscheduler from a VOQ is limited to 1.
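
    A rough sketch of the pipelining idea only: K subschedulers each work on a request snapshot for K time slots, so exactly one of them delivers a matching in every slot. A greedy maximal matching stands in here for the iterative DRRM arbitration, and request dispatching is simplified.

```python
def greedy_maximal_matching(requests):
    """requests: set of (input, output) VOQ requests -> maximal matching dict."""
    used_in, used_out, match = set(), set(), {}
    for i, o in sorted(requests):
        if i not in used_in and o not in used_out:
            match[i] = o
            used_in.add(i)
            used_out.add(o)
    return match

def pmm_schedule(request_snapshots, K=3):
    """request_snapshots[t] is the request set dispatched to a subscheduler at slot t.
    The subscheduler starting at slot t may use K slots; its result is applied at slot t + K."""
    results = {}
    for t, snapshot in enumerate(request_snapshots):
        results[t + K] = greedy_maximal_matching(snapshot)  # ready K slots later
    return results  # slot -> matching applied in that slot

matchings = pmm_schedule([{(0, 1), (1, 1), (1, 2)}, {(0, 2), (1, 0)}, {(0, 0)}])
print(matchings)
```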

  • A High-Speed ATM Switch Based on Scalable Distributed Arbitration

    Eiji OKI  Naoaki YAMANAKA  

     
    LETTER-Switching and Communication Processing

      Vol:
    E80-B No:9
      Page(s):
    1372-1376

    This paper proposes a high-speed crosspoint-buffer-type ATM switch, named the Scalable-Distributed-Arbitration (SDA) switch. The SDA switch employs a new arbitration scheme that makes the switch scalable. It has a crosspoint buffer and a transit buffer at every crosspoint, and arbitration is executed between the two buffers. The arbitration selects a cell based on its delay time using a synchronous counter, and the selected cell is transferred from the crosspoint buffer to the output port by way of several transit buffers. Since arbitration is executed in a distributed manner at each crosspoint and the arbitration time does not depend on the switch size, the SDA switch can be expanded to realize large throughput. Numerical results show that the SDA switch ensures fairness in terms of delay time. In addition, the maximum delay time and the required crosspoint buffer size of the SDA switch are reduced compared with those of the conventional switch based on ring arbitration. Thus, the proposed SDA switch, based on the new arbitration scheme, has a simple and expandable architecture and will be suitable for future high-speed multimedia ATM networks.
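
    A toy sketch of the delay-based arbitration at one crosspoint, assuming cells are stamped with their arrival slot: the head-of-line cell that has waited longer, whether in the crosspoint buffer or the transit buffer, wins the arbitration. The buffer contents and tie-breaking rule are illustrative.

```python
from collections import deque

def arbitrate(crosspoint_buf: deque, transit_buf: deque, now: int):
    """Each buffer holds (arrival_slot, cell). Return the selected cell or None."""
    def waiting(buf):
        return now - buf[0][0] if buf else -1
    if not crosspoint_buf and not transit_buf:
        return None
    src = crosspoint_buf if waiting(crosspoint_buf) >= waiting(transit_buf) else transit_buf
    return src.popleft()[1]

xp = deque([(2, "cell-A")])   # arrived at slot 2
tr = deque([(5, "cell-B")])   # arrived at slot 5
print(arbitrate(xp, tr, now=7))  # cell-A has waited longer (5 slots vs 2), so it wins
```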

  • Heuristic Approach to Distributed Server Allocation with Preventive Start-Time Optimization against Server Failure

    Souhei YANASE  Shuto MASUDA  Fujun HE  Akio KAWABATA  Eiji OKI  

     
    PAPER-Network

      Publicized:
    2021/02/01
      Vol:
    E104-B No:8
      Page(s):
    942-950

    This paper presents a distributed server allocation model with preventive start-time optimization against a single server failure. The presented model preventively determines the assignment of servers to users under each failure pattern so as to minimize the largest maximum delay among all failure patterns. We formulate the proposed model as an integer linear programming (ILP) problem and prove the NP-completeness of the considered problem. As the numbers of users and servers increase, the size of the ILP problem grows and the computation time to solve it becomes excessively large. We therefore develop a heuristic approach that applies simulated annealing and the ILP approach in a hybrid manner to obtain the solution. Numerical results reveal that the developed heuristic approach reduces the computation time by 26% compared to the ILP approach while increasing the largest maximum delay by just 3.4% on average. The model reduces the largest maximum delay compared with the start-time optimization model, and it avoids the instability caused by the unnecessary disconnections permitted by the run-time optimization model.
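
    A very rough simulated-annealing skeleton for the preventive assignment idea, not the authors' hybrid SA+ILP procedure: a candidate solution keeps one user-to-server assignment per single-server failure pattern, and the cost is the largest maximum user-to-user delay over all patterns. The delay model and the neighborhood move are illustrative assumptions.

```python
import math
import random

def pair_delay(u, v, assign, d):
    """Assumed delay model: user -> its server -> peer's server -> peer."""
    su, sv = assign[u], assign[v]
    return d[u][su] + d[su][sv] + d[sv][v]

def cost(plans, users, d):
    """plans: {failed_server: {user: server}}; cost = largest max pair delay."""
    return max(
        pair_delay(u, v, assign, d)
        for assign in plans.values()
        for u in users for v in users if u != v
    )

def anneal(plans, users, servers, d, steps=5000, T0=10.0, alpha=0.999):
    best, best_cost = plans, cost(plans, users, d)
    cur, cur_cost, T = {k: dict(v) for k, v in plans.items()}, best_cost, T0
    for _ in range(steps):
        f = random.choice(list(cur))               # pick one failure pattern
        u = random.choice(users)                   # move one user in that pattern
        cand = {k: dict(v) for k, v in cur.items()}
        cand[f][u] = random.choice([s for s in servers if s != f])  # not the failed server
        c = cost(cand, users, d)
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / T):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        T *= alpha
    return best, best_cost
```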

  • Network Optimization for Energy Saving Considering Link Failure with Uncertain Traffic Conditions

    Ravindra Sandaruwan RANAWEERA  Ihsen Aziz OUÉDRAOGO  Eiji OKI  

     
    PAPER-Network

      Vol:
    E97-B No:12
      Page(s):
    2729-2738

    The energy consumption of the Internet has a huge impact on the world economy, and it is likely to increase every year. In present backbone networks, pairs of nodes are connected by “bundles” of multiple physical cables that form one logical link, so energy saving can be achieved by shutting down unused network resources. The hose model can support traffic demand variations among node pairs in different time periods because it accommodates multiple traffic matrices, unlike the pipe model, which supports only one traffic matrix. This paper proposes an OSPF (Open Shortest Path First) link weight optimization scheme that reduces the network resources used for the hose model while considering single link failures. The proposed scheme employs a heuristic algorithm based on simulated annealing to determine a suitable set of link weights that reduces the worst-case total network resources used while preemptively accounting for any single link failure. It efficiently selects the worst-case-performance link-failure topology and searches for a link weight set that reduces the worst-case total network resources used. Numerical results show that the proposed scheme is more effective in reducing the worst-case total network resources used than the conventional schemes, start-time optimization and minimum hop routing.
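
    A small sketch of the evaluation step only, using networkx: for a candidate weight setting, demands are routed on shortest paths under every single-link failure and the worst total resource usage is kept. A fixed demand list stands in for the hose-model worst case, equal-cost splitting is ignored, and the simulated-annealing search over weights is omitted.

```python
import networkx as nx

def route_load(g, demands):
    """Shortest-path routing by OSPF weight; total load summed over all links."""
    total = 0.0
    for s, t, vol in demands:
        path = nx.shortest_path(g, s, t, weight="weight")
        total += vol * (len(path) - 1)   # vol on every link of the path
    return total

def worst_case_usage(g, demands):
    """Worst total usage over the no-failure case and all single-link failures (undirected nx.Graph assumed)."""
    worst = route_load(g, demands)
    for u, v in list(g.edges):
        h = g.copy()
        h.remove_edge(u, v)
        if nx.is_connected(h):           # skip failures that disconnect the toy topology
            worst = max(worst, route_load(h, demands))
    return worst

g = nx.Graph()
g.add_weighted_edges_from([("a", "b", 1), ("b", "c", 1), ("a", "c", 3)])
print(worst_case_usage(g, demands=[("a", "c", 5.0)]))
```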

  • Scalable Active Optical Access Network Using Variable High-Speed PLZT Optical Switch/Splitter

    Kunitaka ASHIZAWA  Takehiro SATO  Kazumasa TOKUHASHI  Daisuke ISHII  Satoru OKAMOTO  Naoaki YAMANAKA  Eiji OKI  

     
    PAPER

      Vol:
    E95-B No:3
      Page(s):
    730-739

    This paper proposes a scalable active optical access network using a high-speed Plumbum Lanthanum Zirconate Titanate (PLZT) optical switch/splitter. The Active Optical Network, called ActiON, which uses PLZT switching technology, has been presented to increase the number of subscribers and the maximum transmission distance compared to the Passive Optical Network (PON). ActiON supports multicast slot allocation by running the PLZT switch elements in the splitter mode, which forces a switch element to behave as an optical splitter. However, the previous ActiON creates a tradeoff between network scalability and the power loss experienced by the optical signal to each user. It does not use the optical power efficiently because the power is simply split 50:50 without considering the transmission distance from the OLT to each ONU. The proposed network adopts PLZT switch elements in the variable splitter mode, which controls the split ratio of the optical power according to the transmission distance from the OLT to each ONU, in addition to the two existing modes, the switching mode and the splitter mode. The proposed network introduces flexible multicast slot allocation according to the transmission distance from the OLT to each user and the number of requesting users, using the three modes, while keeping the advantages of ActiON, namely scalable and secure access services. Numerical results show that, compared to the previous ActiON, the proposed network dramatically reduces the required number of slots, supports high bandwidth-efficiency services, and extends the coverage of the access network, and that the computation time required for selecting multicast users is less than 30 ms, which is acceptable for on-demand broadcast services.
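
    An illustrative sketch of one possible split-ratio rule (an assumption, not the paper's exact control algorithm): a 1x2 variable splitter's ratio is set so that two ONUs at different fiber distances receive roughly equal optical power.

```python
ATT_DB_PER_KM = 0.3  # assumed fibre attenuation

def split_ratio(dist1_km, dist2_km):
    """Return (r1, r2), r1 + r2 = 1, equalizing received power on the two branches."""
    t1 = 10 ** (-ATT_DB_PER_KM * dist1_km / 10)  # linear transmittance of branch 1
    t2 = 10 ** (-ATT_DB_PER_KM * dist2_km / 10)  # linear transmittance of branch 2
    r1 = t2 / (t1 + t2)  # r1 * t1 == r2 * t2: the lossier branch gets the larger share
    return r1, 1 - r1

print(split_ratio(5, 20))  # the ONU 20 km away gets the larger share of the power
```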

  • Migration Model for Distributed Server Allocation

    Souhei YANASE  Fujun HE  Haruto TAKA  Akio KAWABATA  Eiji OKI  

     
    PAPER-Network Management/Operation

      Publicized:
    2022/07/05
      Vol:
    E106-B No:1
      Page(s):
    44-56

    This paper proposes a migration model for distributed server allocation. In distributed server allocation, each user is assigned to a server to minimize the communication delay. In the conventional model, a user cannot migrate to another server, in order to avoid instability. We develop a model in which each user can migrate to another server while receiving services. We formulate the proposed model as an integer linear programming problem, prove that the considered problem is NP-complete, and introduce a heuristic algorithm. Numerical results show that the proposed model reduces the average communication delay by up to 59% compared to the conventional model.

  • ConSet: Hierarchical Concurrent Path Setup Scheme in Multi-Layer GMPLS Networks

    Eiji OKI  Daisaku SHIMAZAKI  Kohei SHIOMOTO  Naoaki YAMANAKA  

     
    LETTER-Network

      Vol:
    E87-B No:10
      Page(s):
    3107-3110

    This letter proposes a hierarchical label-switched path (LSP) setup scheme, called ConSet, for multi-layer generalized multi-protocol label switching (GMPLS) networks. ConSet allows a Path message to be transmitted to the downstream neighbor node without waiting for the establishment of the higher-order LSP. Confirmation of the establishment of the higher-order LSP is performed at the ingress node of the higher-order LSP before a Resv message of the lower-order LSP is transmitted to the upstream neighbor node. ConSet is able to set up hierarchical LSPs faster than the sequential scheme.

  • Extended Algorithm for Calculating Routes with Include Route Constraint in IP Networks

    Rie HAYASHI  Eiji OKI  Kohei SHIOMOTO  

     
    LETTER-Network

      Vol:
    E90-B No:12
      Page(s):
    3677-3679

    This paper proposes an algorithm for calculating routes that considers the include route constraint while minimizing cost. A route with an include route constraint has to traverse a group of assigned nodes. The difficulty in calculating a route that satisfies an include route constraint is that routes set in different sections may traverse the same link. In order to prevent this violation (overlap), we introduce an alternate route selection policy. Numerical results show that the probability of finding appropriate routes (no overlap) is more than 95% with the proposed algorithm, compared with only 35% for the conventional algorithm.
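
    A sketch of the section-by-section computation with a simple "avoid already used links" rule standing in for the paper's alternate route selection policy; it uses networkx and assumes the include nodes must be visited in the given order.

```python
import networkx as nx

def route_with_includes(g, src, dst, include_nodes, weight="weight"):
    """Visit include_nodes in order; forbid reuse of links across sections to avoid overlap."""
    waypoints = [src, *include_nodes, dst]
    used, full_path = set(), [src]
    for a, b in zip(waypoints, waypoints[1:]):
        h = g.copy()
        h.remove_edges_from(used)          # links already used by earlier sections
        try:
            section = nx.shortest_path(h, a, b, weight=weight)
        except nx.NetworkXNoPath:
            return None                    # no overlap-free route found
        used.update(zip(section, section[1:]))
        used.update((v, u) for u, v in zip(section, section[1:]))
        full_path += section[1:]
    return full_path
```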

  • Network Congestion Minimization Models Based on Robust Optimization

    Bimal CHANDRA DAS  Satoshi TAKAHASHI  Eiji OKI  Masakazu MURAMATSU  

     
    PAPER-Network

      Publicized:
    2017/09/14
      Vol:
    E101-B No:3
      Page(s):
    772-784

    This paper introduces robust optimization models for minimization of the network congestion ratio that can handle the fluctuation in traffic demands between nodes. The simplest and most widely used model to minimize the congestion ratio, called the pipe model, is based on precisely specified traffic demands. However, in practice, network operators are often unable to estimate exact traffic demands as they can fluctuate due to unpredictable factors. To overcome this weakness, we apply robust optimization to the problem of minimizing the network congestion ratio. First, we review existing models as robust counterparts of certain uncertainty sets. Then we consider robust optimization assuming ellipsoidal uncertainty sets, and derive a tractable optimization problem in the form of second-order cone programming (SOCP). Furthermore, we take the uncertainty sets to be the intersection of ellipsoidal and polyhedral sets, and considering the mirror subproblems inherent in the models, obtain tractable optimization problems, again in SOCP form. Compared to the previous model that assumes an error interval on each coordinate, our models have the advantage of being able to cope with the total amount of errors by setting a parameter that determines the volume of the ellipsoid. We perform numerical experiments to compare our SOCP models with the existing models, which are formulated as linear programming problems. The results demonstrate the relevance of our models in terms of congestion ratio and computation time.
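
    A minimal cvxpy sketch of the ellipsoidal-uncertainty case (not the paper's full formulation): each demand is split over precomputed candidate paths so that, for every link, the nominal load plus an ellipsoidal error term stays below r times the capacity, and r is minimized. The topology, candidate paths, and uncertainty data below are made up for illustration.

```python
import cvxpy as cp
import numpy as np

links = ["a-b", "b-c", "a-c"]
cap = np.array([10.0, 10.0, 10.0])
paths = {  # demand -> rows of 0/1 link membership for its candidate paths
    0: np.array([[1, 1, 0], [0, 0, 1]]),   # a->c via b, or direct
    1: np.array([[0, 1, 0], [1, 0, 1]]),   # b->c direct, or via a
}
d0 = np.array([4.0, 3.0])        # nominal demands
P = np.diag([1.0, 0.8])          # ellipsoid shape: d = d0 + P u, ||u|| <= 1

r = cp.Variable(nonneg=True)
x = {k: cp.Variable(paths[k].shape[0], nonneg=True) for k in paths}
cons = [cp.sum(x[k]) == 1 for k in paths]          # each demand fully routed
for l in range(len(links)):
    f = cp.hstack([paths[k][:, l] @ x[k] for k in paths])  # fraction of each demand on link l
    cons.append(f @ d0 + cp.norm(P.T @ f, 2) <= r * cap[l])  # worst-case load <= r * capacity
prob = cp.Problem(cp.Minimize(r), cons)
prob.solve()
print("worst-case congestion ratio:", r.value)
```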

  • Robust Optimization Model for Primary and Backup Capacity Allocations against Multiple Physical Machine Failures under Uncertain Demands in Cloud

    Mitsuki ITO  Fujun HE  Kento YOKOUCHI  Eiji OKI  

     
    PAPER-Network

      Publicized:
    2022/07/05
      Vol:
    E106-B No:1
      Page(s):
    18-34

    This paper proposes a robust optimization model for probabilistic protection under uncertain capacity demands to minimize the total required capacity against multiple simultaneous failures of physical machines. The proposed model determines both primary and backup virtual machine allocations simultaneously under the probabilistic protection guarantee. To express the uncertainty of capacity demands, we introduce an uncertainty set that considers the upper bound of the total demand and the upper and lower bounds of each demand. The robust optimization technique is applied to the optimization model to deal with two uncertainties: failure events and capacity demands. With this technique, the model is formulated as a mixed integer linear programming (MILP) problem. To solve larger-sized problems, a simulated annealing (SA) heuristic is introduced. In SA, we obtain the capacity demands by solving maximum flow problems. Numerical results show that our proposed model reduces the total required capacity compared with the conventional model by determining both primary and backup virtual machine allocations simultaneously. We also compare the results of MILP, SA, and a baseline greedy algorithm. For a larger-sized problem, we obtain approximate solutions in a practical time by using SA and the greedy algorithm.

  • QoS Control Mechanism Based on Real-Time Measurement of Elephant Flows

    Rie HAYASHI  Takashi MIYAMURA  Eiji OKI  Kohei SHIOMOTO  

     
    PAPER-Network

      Vol:
    E90-B No:8
      Page(s):
    2081-2089

    This paper proposes a scalable QoS control scheme, called the Elephant Flow Control Scheme (EFCS), for high-speed large-capacity networks; it controls congestion and provides appropriate bandwidth to normal users' flows by controlling just the elephant flows. EFCS introduces a sampling packet threshold and drops packets considering flow size. EFCS also adopts a compensation parameter to control elephant flows to an appropriate level. Numerical results show that the sampling threshold increases control accuracy by 20% while reducing the amount of memory needed for packet sampling by 60%, and that the elephant flows are controlled as intended by the compensation parameter. As a result, EFCS provides sufficient bandwidth to normal TCP flows in a scalable manner.
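
    A toy sketch of the sampling-threshold idea: flows whose sampled-packet count exceeds a threshold are treated as elephants, and their packets are dropped with a probability that grows with how far the flow exceeds its fair share, scaled by a compensation parameter. All constants and the drop rule are illustrative assumptions, not EFCS itself.

```python
import random
from collections import Counter

SAMPLE_PROB = 0.01       # packet sampling probability
SAMPLE_THRESHOLD = 5     # sampled packets needed to call a flow an elephant
COMPENSATION = 0.5       # scales how aggressively elephants are policed

sampled = Counter()      # flow_id -> number of sampled packets

def on_packet(flow_id, flow_bytes, fair_share_bytes):
    """Return True if the packet should be dropped."""
    if random.random() < SAMPLE_PROB:
        sampled[flow_id] += 1
    if sampled[flow_id] < SAMPLE_THRESHOLD:
        return False                       # mice are never policed
    excess = max(0.0, flow_bytes - fair_share_bytes) / max(flow_bytes, 1.0)
    return random.random() < COMPENSATION * excess
```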

  • Implementation and Experiments of Path Computation Element Based Backbone Network Architecture

    Tomonori TAKEDA  Eiji OKI  Ichiro INOUE  Kohei SHIOMOTO  Kazuhiro FUJIHARA  Shin-Ichi KATO  

     
    LETTER-Fiber-Optic Transmission for Communications

      Vol:
    E91-B No:8
      Page(s):
    2704-2706

    This paper proposes the Path Computation Element (PCE)-based backbone network architecture and verifies its feasibility through implementation and experiments. PCE communication Protocol (PCEP) is implemented for communication between the PCE and the management system to control and manage Generalized Multi-Protocol Label Switching (GMPLS)-based backbone networks.

  • Bidirectional Path Setup Scheme Using on Upstream Label Set in Optical GMPLS Networks

    Eiji OKI  Nobuaki MATSUURA  Kohei SHIOMOTO  Naoaki YAMANAKA  

     
    PAPER-Network

      Vol:
    E87-B No:6
      Page(s):
    1569-1576

    Generalized Multi-Protocol Label Switching (GMPLS) is being developed in the Internet Engineering Task Force (IETF). In GMPLS-based wavelength-division-multiplexing (WDM) optical networks, a wavelength in a fiber is used as a label. In the existing GMPLS signaling protocol for bidirectional paths in WDM networks with the wavelength continuity constraint, bidirectional path setup fails with high probability because the upstream label allocated by the previous-hop node may not be accepted at a transit node. To solve this problem, this paper proposes an efficient bidirectional label switched path (LSP) setup scheme based on an upstream label set. Called the Upstream Label Set (ULS) scheme, it is an extension of the existing GMPLS signaling protocol; it is consistent with the existing GMPLS signaling procedure and so offers backward compatibility. The numerical results suggest that when the number of LSP setup retries is limited, the ULS scheme offers lower blocking probability than the existing GMPLS signaling scheme, which uses only the upstream label (UL). In addition, when the constraint on the number of LSP setup retries is relaxed, the LSP setup time of the ULS scheme is shorter than that of the existing scheme. Furthermore, using our prototype GMPLS control system, in which the ULS scheme was installed, we demonstrated that the ULS scheme successfully sets up bidirectional LSPs.
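
    A sketch of the label-set idea only: the Path message carries a set of candidate upstream wavelengths, every hop intersects it with the wavelengths it has free, and the setup succeeds as long as the set never becomes empty. The data and the final label choice below are illustrative.

```python
def setup_bidirectional_lsp(hops, initial_label_set):
    """hops: list of per-node sets of free wavelengths along the route."""
    label_set = set(initial_label_set)
    for free in hops:
        label_set &= free
        if not label_set:
            return None          # blocked: no continuous wavelength remains
    return min(label_set)        # any remaining wavelength can serve as the upstream label

print(setup_bidirectional_lsp(
    [{1, 2, 3}, {2, 3}, {2, 4}],      # free wavelengths at each transit node
    initial_label_set={1, 2, 3, 4},
))  # -> 2
```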

  • Latest Trends in Traffic Matrix Modeling and Its Application to Multilayer TE

    Rie HAYASHI  Takashi MIYAMURA  Daisaku SHIMAZAKI  Eiji OKI  Kohei SHIOMOTO  

     
    SURVEY PAPER-Traffic Engineering and Multi-Layer Networking

      Vol:
    E90-B No:8
      Page(s):
    1912-1921

    We survey traffic matrix models, whose elements represent the traffic demand between source-destination node pairs. Modeling the traffic matrix is useful for multilayer Traffic Engineering (TE) in IP optical networks. Multilayer TE techniques make the network flexible and reliable because they allow reconfiguration of the virtual network topology (VNT), which consists of a set of lower-layer (optical) paths provided to the higher layer, in response to diurnal fluctuations in traffic demand. It is therefore important to synthetically generate traffic matrices as close to the real ones as possible to maximize the performance of multilayer TE. We compare several models and clarify their applicability to VNT design and control. We find that it is difficult in practice to build an accurate traffic matrix with conventional schemes because of the high cost of data measurement and the complicated calculations involved. To overcome these problems, we introduce a simplified traffic matrix model that is practical and mirrors real networks well. Next, this paper presents our developed server, the IP Optical TE server, which performs multilayer TE in IP optical networks. We evaluate the effectiveness of multilayer TE using the developed IP Optical TE server and the simplified traffic matrix, and confirm that multilayer TE offers significant CAPEX savings. Similarly, we demonstrate basic traffic control in IP optical networks, and confirm the dynamic control of the network and the feasibility of the IP Optical TE server.

  • Connection Setup Signaling Scheme with Flooding-Based Path Searching for Diverse-Metric Network

    Ko KIKUTA  Daisuke ISHII  Satoru OKAMOTO  Eiji OKI  Naoaki YAMANAKA  

     
    PAPER-Network

      Vol:
    E95-B No:8
      Page(s):
    2600-2609

    Connection setup on various computer networks is now achieved by GMPLS. This technology is based on the source-routing approach, which requires the source node to store metric information for the entire network prior to computing a route. Thus, all metric information must be distributed to all network nodes and kept up to date. However, as metric information becomes more diverse and generalized, keeping all of it current is difficult because of the huge update overhead. Emerging network services and applications require the network to support diverse metrics to achieve various communication qualities, and increasing the number of metrics supported by the network causes excessive processing of metric update messages. To reduce the number of metric update messages, another scheme is required. This paper proposes a connection setup scheme that uses flooding-based signaling rather than the distribution of metric information. The proposed scheme requires only the flooding of signaling messages carrying the requested metric information; no routing protocol is required. Evaluations confirm that the proposed scheme achieves connection establishment without excessive overhead. Our analysis shows that the proposed scheme greatly reduces the number of control messages compared to the conventional scheme, while their blocking probabilities are comparable.

  • User-Programmable Flexible ATM Network Architecture, Active-ATM

    Naoaki YAMANAKA  Eiji OKI  Haruhisa HASEGAWA  Thomas M. CHEN  

     
    LETTER-Communication Networks and Services

      Vol:
    E81-B No:11
      Page(s):
    2233-2236

    This article proposes active-ATM, a flexible, simple, and cost-effective ATM-WAN architecture that can handle multiple user-customized ATM-layer protocols, such as ABR and ABT, by using a simple universal ATM transit network. The proposed active-ATM architecture enables the construction of flexible networks that can evolve easily. With active-ATM and the ATM multi-protocol emulation network architecture called ALPEN, new ATM-layer protocols are easy to implement using user-created programs, called active-program capsules, that modify only the edge nodes. Because these user-sent program capsules can quickly customize the edge nodes, there is no waiting for standardization and implementation of new services. The ATM-layer protocols are emulated only at the edge nodes, making the transit network independent of customer ATM-layer protocols. The active-ATM edge node is based on the flexible programmable node architecture called PUN (programmable unified node). The PUN is a platform for user-programmable ATM-layer services, realized with programmable devices such as FPGAs and DSPs. A prototype system has demonstrated the flexibility of the resulting ATM network. The active-ATM architecture is an efficient approach to implementing multimedia, multi-protocol ATM services in an ATM WAN.

  • Call Admission Control Scheme Based on Statistical Information

    Takayuki FUJIWARA  Eiji OKI  Kohei SHIOMOTO  

     
    LETTER-Network

      Vol:
    E92-B No:4
      Page(s):
    1361-1364

    A call admission control (CAC) scheme based on statistical information, called the statistical CAC scheme, is proposed. A conventional scheme needs to manage session information for each link to update the residual bandwidth of the network in real time, which causes a scalability problem in terms of network size. The statistical CAC rejects session setup requests in accordance with a pre-computed ratio, called the rejection ratio. The rejection ratio is computed from statistical information about the bandwidth requested for each link so that the congestion probability is less than an upper bound specified by the network operator. The statistical CAC is more scalable in terms of network size than the conventional scheme because it does not need to keep per-session state information. Numerical results show that the statistical CAC, even without exact session state information, only slightly degrades network utilization compared with the conventional scheme.
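
    A hedged sketch of how a per-link rejection ratio could be derived: assuming the aggregate requested bandwidth is roughly Gaussian (an assumption, not the paper's exact statistics), the smallest ratio is found such that the accepted load exceeds the link capacity with probability at most the operator's bound.

```python
from math import sqrt
from statistics import NormalDist

def rejection_ratio(mean_req, var_req, capacity, p_congestion=0.01):
    """Smallest r in [0, 1] such that P[(1 - r) * demand > capacity] <= p_congestion."""
    z = NormalDist().inv_cdf(1 - p_congestion)
    for r in (i / 1000 for i in range(1001)):
        accepted_mean = (1 - r) * mean_req
        accepted_std = (1 - r) * sqrt(var_req)   # simple scaling approximation
        if accepted_mean + z * accepted_std <= capacity:
            return r
    return 1.0

print(rejection_ratio(mean_req=900.0, var_req=40_000.0, capacity=1000.0))  # about 0.27
```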

  • Fault-Tolerant Controller Placement Model by Distributing Switch Load among Multiple Controllers in Software-Defined Network

    Seiki KOTACHI  Takehiro SATO  Ryoichi SHINKUMA  Eiji OKI  

     
    PAPER-Network

      Publicized:
    2021/12/01
      Vol:
    E105-B No:5
      Page(s):
    533-544

    One of the features of a software-defined network (SDN) is a logically centralized control plane hosting one or more SDN controllers. As SDN controller placement can impact network performance, it is widely studied as the controller placement problem (CPP). For a cost-effective network design, network providers need to minimize the number of SDN controllers used in the network, since each SDN controller incurs installation and maintenance costs. Moreover, network providers need to deal with the failure of SDN controllers. Existing studies that consider SDN controller failures connect each SDN switch to one Master controller and one or more Slave controllers. The problem with this scheme is that the computing capacity of each SDN controller cannot be used efficiently, since one SDN controller handles the load of all SDN switches connected to it. The number of SDN controllers required can be reduced by distributing the load of each SDN switch among multiple SDN controllers. This paper proposes a controller placement model that allows this load distribution while protecting against SDN controller failures. The proposed model determines the ratios of computing capacity demanded by each SDN switch on the SDN controllers connected to it, as well as the number and placement of SDN controllers and the assignment of each SDN switch to SDN controllers. Controller placement is determined so that a network provider can continue to manage all SDN switches as long as no more than a certain number of SDN controller failures occur. We develop two load distribution methods, split and even-split, and formulate the proposed model with each method as an integer linear programming problem. Numerical results show that the proposed model reduces the number of SDN controllers compared to a benchmark model; the maximum reduction ratio is 38.8% when the system latency requirement between an SDN switch and an SDN controller is 100 ms, the computing capacity of each SDN controller is 6 × 10^6 packets/s, and the maximum number of SDN controllers that can fail at the same time is one.
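
    A small PuLP sketch of the split-load placement core, with the failure-tolerance constraints omitted and the latency check reduced to a placeholder flag: binary y[c] places a controller at candidate site c and continuous x[s][c] is the share of switch s's load handled by c; all input data are made up.

```python
import pulp

switches = ["s1", "s2", "s3"]
sites = ["c1", "c2"]
load = {"s1": 3e5, "s2": 4e5, "s3": 5e5}                       # packets/s per switch
cap = 6e5                                                       # capacity of one controller
latency_ok = {(s, c): True for s in switches for c in sites}    # stand-in for the latency requirement

prob = pulp.LpProblem("controller_placement", pulp.LpMinimize)
y = pulp.LpVariable.dicts("place", sites, cat="Binary")
x = pulp.LpVariable.dicts("share", (switches, sites), lowBound=0, upBound=1)

prob += pulp.lpSum(y[c] for c in sites)                         # minimize controllers used
for s in switches:
    prob += pulp.lpSum(x[s][c] for c in sites) == 1             # whole load must be covered
    for c in sites:
        prob += x[s][c] <= (1 if latency_ok[(s, c)] else 0) * y[c]
for c in sites:
    prob += pulp.lpSum(load[s] * x[s][c] for s in switches) <= cap * y[c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({c: y[c].value() for c in sites})   # two controllers are needed for this toy data
```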
