
Author Search Result

[Author] Hidenori NAKAZATO (13 hits)

  • Intrusion Detection by Monitoring System Calls with POSIX Capabilities

    Takahiro HARUYAMA  Hidenori NAKAZATO  Hideyoshi TOMINAGA  

     
    PAPER

      Vol:
    E90-B No:10
      Page(s):
    2646-2654

    Existing anomaly intrusion detection that monitors system calls has two problems: a vast number of false positives and a lack of risk information at detection time. To solve these two problems, we propose an intrusion detection method called "Callchains." Callchains reduces the false positives of existing anomaly intrusion detection by restricting monitoring to activities that hold process capabilities prescribed by POSIX 1003.1e. Additionally, Callchains provides the administrator with information on the POSIX capabilities used during system call execution as an indicator of risk. This paper describes Callchains' design and implementation, and presents experimental results comparing Callchains with existing approaches.
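
    The paper's Callchains implementation is not reproduced here. As a rough, hypothetical illustration of the idea the abstract describes, the Python sketch below reads a process's effective POSIX capability mask from /proc/<pid>/status and monitors only processes that actually hold privileged capabilities; the capability subset and the should_monitor rule are assumptions made for illustration.

```python
# Illustrative sketch only: decode a process's effective POSIX capabilities
# from /proc/<pid>/status and decide whether its system calls should be
# monitored. This is NOT the Callchains implementation from the paper.
CAP_NAMES = {            # subset of capability bits from linux/capability.h
    0: "CAP_CHOWN", 1: "CAP_DAC_OVERRIDE", 7: "CAP_SETUID",
    12: "CAP_NET_ADMIN", 21: "CAP_SYS_ADMIN",
}

def effective_capabilities(pid="self"):
    """Return the set of named effective capabilities held by a process."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("CapEff:"):
                mask = int(line.split()[1], 16)
                return {name for bit, name in CAP_NAMES.items() if mask >> bit & 1}
    return set()

def should_monitor(pid="self"):
    """Monitor only processes that hold privileged capabilities, which is
    the false-positive-reduction idea sketched in the abstract."""
    return bool(effective_capabilities(pid))

if __name__ == "__main__":
    print(effective_capabilities(), should_monitor())
```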

  • Analysis of Divisible Load Scheduling with Result Collection on Heterogeneous Systems

    Abhay GHATPANDE  Hidenori NAKAZATO  Olivier BEAUMONT  Hiroshi WATANABE  

     
    PAPER-Network

      Vol:
    E91-B No:7
      Page(s):
    2234-2243

    Divisible Load Theory (DLT) is an established framework for studying Divisible Load Scheduling (DLS). Traditional DLT ignores the result collection phase and specifies no solution to the general case where both the network speed and the computing capacity of the nodes are heterogeneous. In this paper, the DLS with Result Collection on HETerogeneous Systems (DLSRCHETS) problem is formulated as a linear program and analyzed. The papers to date that have dealt with result collection have proposed simplistic LIFO (Last In, First Out) and FIFO (First In, First Out) types of schedules as solutions. The main contributions of this paper are: (a) a proof of the Allocation Precedence Condition, which is inconsequential in LIFO or FIFO schedules but is important in a general schedule, and (b) a proof of the Idle Time Theorem, which states that, irrespective of whether load is allocated to all available processors, in the optimal solution to the DLSRCHETS problem at most one processor that is allocated load has idle time, and that this idle time exists only when result collection begins immediately after the completion of load distribution.
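
    The exact DLSRCHETS linear program is not given in this abstract; the LaTeX sketch below is a deliberately simplified stand-in that drops the one-port sequencing constraints which make the real formulation nontrivial. The symbols (load fractions alpha_j, per-unit communication and computation times C_j and E_j, result/load size ratio delta, makespan T) are notational assumptions, not the paper's.

```latex
% Simplified illustration only (not the paper's DLSRCHETS formulation):
% \alpha_j = load fraction assigned to processor j, C_j = communication time
% per unit load, E_j = computation time per unit load, \delta = result/load
% size ratio, T = makespan. One-port sequencing constraints are omitted.
\begin{align}
  \min_{\alpha,\, T} \quad & T \\
  \text{s.t.} \quad & \sum_{j=1}^{m} \alpha_j = 1, \qquad \alpha_j \ge 0, \\
  & \alpha_j C_j + \alpha_j E_j + \delta\,\alpha_j C_j \le T, \qquad j = 1,\dots,m.
\end{align}
```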

  • EDITORS' ADDRESS

    Hideyoshi TOMINAGA  Hidenori NAKAZATO  Naoaki YAMANAKA  

     
    EDITORS' ADDRESS

      Vol:
    E82-B No:5
      Page(s):
    675-676
  • SPORT: An Algorithm for Divisible Load Scheduling with Result Collection on Heterogeneous Systems

    Abhay GHATPANDE  Hidenori NAKAZATO  Olivier BEAUMONT  Hiroshi WATANABE  

     
    PAPER-Network

      Vol:
    E91-B No:8
      Page(s):
    2571-2588

    Divisible Load Theory (DLT) is an established mathematical framework for studying Divisible Load Scheduling (DLS). However, traditional DLT does not address the scheduling of results back to the source (i.e., result collection), nor does it comprehensively deal with system heterogeneity. In this paper, the DLSRCHETS (DLS with Result Collection on HETerogeneous Systems) problem is addressed. The few papers to date that have dealt with DLSRCHETS have proposed simplistic LIFO (Last In, First Out) and FIFO (First In, First Out) types of schedules as solutions. In this paper, a new polynomial-time heuristic algorithm, SPORT (System Parameters based Optimized Result Transfer), is proposed as a solution to the DLSRCHETS problem. Simulations show that the performance of SPORT is significantly better than that of existing algorithms. The other major contributions of this paper include, for the first time, (a) the derivation of the condition that identifies the presence of idle time in a FIFO schedule for two processors, (b) the identification of the limiting condition for the optimality of FIFO and LIFO schedules for two processors, and (c) the introduction of the concept of an equivalent processor in DLS for heterogeneous systems with result collection.
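
    The SPORT algorithm itself and the paper's idle-time condition are not reproduced here. The toy Python timeline below, built on a simple one-port model with hypothetical parameters, only illustrates how idle time can appear in a two-processor FIFO schedule with result collection.

```python
# Toy FIFO timeline for two processors under a simple one-port model.
# Not the SPORT algorithm: it just illustrates how idle time can appear when
# results are collected in FIFO order. All parameters are hypothetical.
def fifo_timeline(alpha, C, E, delta):
    """alpha: load fractions (sum to 1); C[j]: comm time per unit load;
    E[j]: compute time per unit load; delta: result/load size ratio."""
    send_end = [alpha[0] * C[0]]
    send_end.append(send_end[0] + alpha[1] * C[1])          # one-port source link
    comp_end = [send_end[j] + alpha[j] * E[j] for j in range(2)]
    # FIFO result collection: p0's result returns first, then p1's.
    r0_start = max(comp_end[0], send_end[1])
    r0_end = r0_start + delta * alpha[0] * C[0]
    r1_start = max(comp_end[1], r0_end)
    r1_end = r1_start + delta * alpha[1] * C[1]
    # "Idle" here = time each processor waits between finishing its
    # computation and starting to return its result.
    idle = [r0_start - comp_end[0], r1_start - comp_end[1]]
    return r1_end, idle

makespan, idle = fifo_timeline([0.6, 0.4], C=[1.0, 2.0], E=[3.0, 2.5], delta=0.2)
print(makespan, idle)
```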

  • Autonomous IP Fast Rerouting with Compressed Backup Flow Entries Using OpenFlow

    Shohei KAMAMURA  Daisaku SHIMAZAKI  Atsushi HIRAMATSU  Hidenori NAKAZATO  

     
    PAPER

      Vol:
    E96-D No:2
      Page(s):
    184-192

    This paper proposes an IP fast rerouting method that can be implemented in the OpenFlow framework. While current IP routing is robust, its reactive and global rerouting process requires a long recovery time after a failure. IP fast rerouting, in contrast, provides milliseconds-order recovery through a proactive and local restoration mechanism. However, IP fast rerouting is rarely implemented in real systems because it requires additional forwarding functions to be built into commercial hardware. We propose an IP fast rerouting mechanism using OpenFlow, which separates the control function from the hardware implementation; our mechanism does not require any extension of current forwarding hardware. On the other hand, the increase in backup routes becomes the main overhead of our proposal, so we also embed a compression mechanism for backup flow entries into our IP fast rerouting mechanism. Through computer simulations, we show the effectiveness of our IP fast rerouting in terms of fast restoration and the backup-route compression effect.
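
    Neither the paper's compression scheme nor a real OpenFlow API is shown here. The sketch below only illustrates the general idea of shrinking a backup flow table by grouping per-destination backup entries that share the same backup action; the data structures are hypothetical.

```python
# Illustration only: group per-destination backup entries by their shared
# backup output port, a precondition for replacing them with fewer aggregated
# rules. This is not the compression scheme of the paper, and no real
# OpenFlow API is used here.
from collections import defaultdict

def compress_backup_entries(backup_next_hop):
    """backup_next_hop: dict mapping destination prefix -> backup output port.
    Returns one aggregated rule (port, [prefixes]) per backup port."""
    groups = defaultdict(list)
    for prefix, port in backup_next_hop.items():
        groups[port].append(prefix)
    return [(port, sorted(prefixes)) for port, prefixes in groups.items()]

rules = compress_backup_entries({
    "10.0.1.0/24": 2, "10.0.2.0/24": 2, "10.0.3.0/24": 3,
})
print(rules)  # [(2, ['10.0.1.0/24', '10.0.2.0/24']), (3, ['10.0.3.0/24'])]
```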

  • Network Coder Placement for Peer-to-Peer Content Distribution

    Dinh NGUYEN  Hidenori NAKAZATO  

     
    PAPER

      Vol:
    E96-B No:7
      Page(s):
    1661-1669

    We study the use of network coding to speed up content distribution in peer-to-peer (P2P) networks. Our goal is to identify the underlying reason for network coding's improved performance in P2P content distribution and to optimize the resource consumption of network coding. We observe analytically and experimentally that, in pure P2P networks, a considerable amount of data is sent multiple times from one peer to another when there are multiple paths connecting those two peers. Network coding, on the other hand, when applied at upstream peers, eliminates information duplication on paths to downstream peers, which results in more efficient content distribution. Based on this insight, we propose a network coder placement algorithm that achieves a distribution time comparable to full network coding, yet substantially reduces the number of encoders compared to a pure network coding solution in which all peers must encode. Our placement method puts encoders at critical network positions where they eliminate the most information duplication, thus effectively shortening distribution time with only a fraction of the encoders.
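
    The placement algorithm of the paper is not reproduced here. The following sketch is a hypothetical heuristic in the spirit of the abstract: a peer is flagged as an encoder candidate when two of its outgoing paths can reach the same downstream peer, i.e., where duplicate information would otherwise be forwarded along multiple paths.

```python
# Illustration only, not the placement algorithm from the paper: flag a peer
# as an encoder candidate when two of its outgoing paths can reach the same
# downstream peer.
def reachable(graph, start):
    """Return all peers reachable from start in a directed overlay graph."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen

def encoder_candidates(graph):
    candidates = set()
    for peer, succs in graph.items():
        sets = [reachable(graph, s) for s in succs]
        for i in range(len(sets)):
            for j in range(i + 1, len(sets)):
                if sets[i] & sets[j]:      # two outgoing paths re-converge
                    candidates.add(peer)
    return candidates

overlay = {"src": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(encoder_candidates(overlay))  # {'src'}: both of src's paths reach c
```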

  • D2EcoSys: Decentralized Digital Twin EcoSystem Empower Co-Creation City-Level Digital Twins Open Access

    Kenji KANAI  Hidehiro KANEMITSU  Taku YAMAZAKI  Shintaro MORI  Aram MINE  Sumiko MIYATA  Hironobu IMAMURA  Hidenori NAKAZATO  

     
    INVITED PAPER

      Publicized:
    2023/10/26
      Vol:
    E107-B No:1
      Page(s):
    50-62

    A city-level digital twin is a critical enabling technology for constructing a smart city that helps improve citizens' living conditions and quality of life. Research and development on digital replica cities are currently being pursued worldwide; however, many research projects focus only on creating the 3D city model. A mechanism to involve key players, such as data providers, service providers, and application developers, is essential for constructing a digital replica city and producing various city applications. Based on this motivation, the authors of this paper are pursuing a research project, Decentralized Digital Twin EcoSystem (D2EcoSys), to create an ecosystem that advances (and self-grows) the digital replica city along the time and space dimensions, city services, and values. This paper introduces an overview of the D2EcoSys project: its vision, problem statement, and approach. In addition, the paper discusses recent research results on networking technologies and demonstrates an early testbed built in the Kashiwa-no-ha smart city.

  • Loop-Free IP Fast Rerouting Considering Double-Link Failures

    Shohei KAMAMURA  Daisaku SHIMAZAKI  Atsushi HIRAMATSU  Hidenori NAKAZATO  

     
    PAPER-Network

      Vol:
    E95-B No:12
      Page(s):
    3811-3821

    IP fast rerouting has been widely studied as a way to realize millisecond-order recovery on pure IP networks. This paper proposes IP fast rerouting using backup topologies against concurrent double failures. The main issue in recovering from multiple failures is avoiding forwarding loops. To avoid them, we propose a deterministic forwarding algorithm that estimates the concurrently occurring failures from packet header information. We also propose an efficient backup topology design algorithm that guarantees loop freedom while reducing the number of backup topologies. Our key idea is to prepare adequate diversity of backup routes for arbitrary source-destination pairs through combinations of backup topologies. For efficient computation of diverse routes, we propose an algorithm based on similarity comparison between the original topology and the backup topologies. Our algorithm achieves nearly optimal loop-free restoration from double failures on realistic topologies without explicit failure notification.
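
    As a hypothetical illustration only (the paper's design and forwarding algorithms are not reproduced), the sketch below shows the basic selection step with backup topologies modeled as sets of excluded links: recovery from a double failure picks a backup topology that excludes both failed links.

```python
# Illustration only (not the paper's algorithm): with backup topologies that
# each exclude some set of links, recovering from a double failure amounts to
# picking a backup topology whose excluded links cover both failed links.
def select_backup_topology(backup_topologies, failed_links):
    """backup_topologies: dict name -> set of links excluded from that topology.
    failed_links: set of failed links. Returns a usable topology name or None."""
    for name, excluded in backup_topologies.items():
        if failed_links <= excluded:
            return name
    return None

topologies = {"B1": {("a", "b"), ("c", "d")}, "B2": {("a", "b"), ("b", "c")}}
print(select_backup_topology(topologies, {("a", "b"), ("b", "c")}))  # B2
```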

  • Two-Level Popularity-Oriented Cache Replacement Policy for Video Delivery over CCN

    Haipeng LI  Hidenori NAKAZATO  

     
    PAPER

      Vol:
    E99-B No:12
      Page(s):
    2532-2540

    We introduce a novel cache replacement policy to improve the overall network performance of video delivery over content-centric networking (CCN). For the CCN architecture, we argue that: 1) in video multiplexing scenarios, general cache strategies that ignore the intrinsic linear-in-time characteristic of video requests cannot make full use of cache resources, and 2) it is inadequate to simply extend existing conclusions on file-level popularity to the chunk-by-chunk popularity widely used in CCN. Unlike previous works in this field, the proposed policy, named the two-level popularity-oriented time-to-hold cache replacement policy (TLP-TTH), is designed on the following principles. First, the cache replacement strategy is customized for video delivery by carefully considering the auto-correlated request pattern of video chunks within a video file. Furthermore, popularity in video delivery is subdivided into two levels, chunk-level access probability and file-level popularity, in order to utilize cache resources efficiently. We evaluated the proposed policy in both a hierarchical topology and a hybrid topology based on a real network, and also took viewer departure into consideration. The results validate that, for video delivery over CCN, the TLP-TTH policy improves network performance in several respects. In particular, the proposed policy not only increases the cache hit ratio at the edge of the network but also markedly improves cache utilization at intermediate routers. Furthermore, under video popularity variation, the cache hit ratio of the TLP-TTH policy responds quickly to maintain efficient cache utilization.
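
    TLP-TTH itself is not reproduced here. The sketch below is a simplified, hypothetical illustration of the two-level idea only: cached chunks are ranked by file-level popularity combined with chunk-level access probability, and the lowest-ranked chunk is evicted.

```python
# Simplified illustration of the two-level idea only, not the TLP-TTH policy
# itself: rank cached chunks by file-level popularity combined with a
# chunk-level access probability, and evict the lowest-ranked chunk.
def evict_candidate(cache, file_popularity, chunk_access_prob):
    """cache: iterable of (file_id, chunk_id);
    file_popularity: dict file_id -> weight;
    chunk_access_prob: dict (file_id, chunk_id) -> probability."""
    def score(entry):
        file_id, _chunk_id = entry
        return file_popularity.get(file_id, 0.0) * chunk_access_prob.get(entry, 0.0)
    return min(cache, key=score)

cache = [("movieA", 3), ("movieA", 250), ("movieB", 1)]
popularity = {"movieA": 0.7, "movieB": 0.3}
access_prob = {("movieA", 3): 0.9, ("movieA", 250): 0.2, ("movieB", 1): 0.8}
print(evict_candidate(cache, popularity, access_prob))  # ('movieA', 250)
```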

  • Efficient Producer Mobility Support in Named Data Networking

    Siran ZHANG  Zhiwei YAN  Yong-Jin PARK  Hidenori NAKAZATO  Wataru KAMEYAMA  Kashif NISAR  Ag Asri Ag IBRAHIM  

     
    PAPER-Network

      Publicized:
    2017/04/06
      Vol:
    E100-B No:10
      Page(s):
    1856-1864

    Named Data Networking (NDN) is a promising architecture for the future Internet, designed mainly for efficient content delivery and retrieval. However, producer mobility support remains one of the challenging problems in NDN. This paper proposes a scheme that optimizes the tunneling-based producer mobility solution in NDN. It does not require NDN routers to change their routing tables (Forwarding Information Base) after a producer moves. Instead, an Interest packet can be sent from a consumer to the moved producer through the tunnel, and the piggybacked Data packet sent back to the consumer triggers the consumer to send subsequent Interest packets over the optimized path to the producer. Moreover, a naming scheme is proposed so that the NDN caching function can be fully utilized. An analysis is carried out to evaluate the performance of the proposal. The results indicate that the proposed scheme reduces network cost compared to related works and supports route optimization for enhanced producer mobility support in NDN.

  • Design of Reconfigurable Lightpaths in IP over WDM Networks

    Hiroaki HARAI  Fumito KUBOTA  Hidenori NAKAZATO  

     
    PAPER

      Vol:
    E83-B No:10
      Page(s):
    2234-2244

    The forwarding speed of IP routers must grow to accommodate the skyrocketing amount of traffic on the Internet. MPLS, which relies on the high processing power of lower layers, is one solution and is under development. A WDM network, on the other hand, has been expected to serve as a high-speed network, but it has also been called a "stupid" network because it lacks traffic granularity. To bridge these two layers, IP over WDM networks based on the MPLS concept have been proposed. Such a network has the potential to effectively use the large transmission capacity provided by WDM technology. In this paper, we design IP over WDM networks that reconfigure IP routing and lightpaths every day or month. We formulate the problem of maximizing network throughput as an integer linear program. Through numerical examples, we show that the increase in network throughput in IP over WDM networks is larger than that in IP networks. We also show the region in which this method is applicable to the reconfigurable network.
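
    The paper's integer linear program is not given in the abstract; the LaTeX sketch below is a generic throughput-maximization formulation of the same flavor, with all symbols (accepted traffic x_sd, lightpath counts b_l, per-pair flows f, lightpath capacity Cap, wavelength limits W_e) chosen here purely for illustration.

```latex
% Generic illustration only, not the formulation used in the paper.
% x_{sd} = accepted traffic for node pair (s,d); b_{\ell} = number of
% lightpaths on logical link \ell (integer); f^{sd}_{\ell} = flow of pair
% (s,d) on \ell; Cap = capacity of one lightpath; W_e = wavelengths on
% fiber e; R(\ell) = fibers traversed by \ell.
\begin{align}
  \max \quad & \sum_{(s,d)} x_{sd} \\
  \text{s.t.} \quad & \sum_{\ell \in \mathrm{out}(n)} f^{sd}_{\ell}
      - \sum_{\ell \in \mathrm{in}(n)} f^{sd}_{\ell} =
      \begin{cases} x_{sd} & n = s \\ -x_{sd} & n = d \\ 0 & \text{otherwise} \end{cases}
      \quad \forall (s,d),\, n, \\
  & \sum_{(s,d)} f^{sd}_{\ell} \le \mathrm{Cap}\cdot b_{\ell} \quad \forall \ell, \qquad
    \sum_{\ell : e \in R(\ell)} b_{\ell} \le W_e \quad \forall e, \qquad
    b_{\ell} \in \mathbb{Z}_{\ge 0}.
\end{align}
```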

  • FOREWORD Open Access

    Hidenori NAKAZATO  

     
    FOREWORD

      Vol:
    E101-B No:8
      Page(s):
    1752-1752
  • Call Admission Control on Single Node Networks under Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) Scheduler

    Masaki HANADA  Hidenori NAKAZATO  Hitoshi WATANABE  

     
    PAPER-Network

      Vol:
    E95-B No:2
      Page(s):
    401-414

    Multimedia applications such as music and video streaming, video teleconferencing, and IP telephony are flourishing in packet-switched networks. Applications that generate such real-time data can have very diverse quality-of-service (QoS) requirements. To guarantee diverse QoS requirements, the combined use of a packet scheduling algorithm based on Generalized Processor Sharing (GPS) and a leaky-bucket traffic regulator has been the most successful QoS mechanism. GPS can provide a minimum guaranteed service rate for each session and tight delay bounds for leaky-bucket-constrained sessions. However, the delay bounds for leaky-bucket-constrained sessions under GPS are unnecessarily large because each session is served according to its associated constant weight until the session buffer is empty. To solve this problem, a scheduling policy called Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) was proposed in [17]. ORC-GPS is a rate-based scheduling policy like GPS, and it controls the service rate in order to lower the delay bounds for leaky-bucket-constrained sessions. In this paper, we propose a call admission control (CAC) algorithm for ORC-GPS for leaky-bucket-constrained sessions with deterministic delay requirements. The CAC algorithm determines the optimal values of the ORC-GPS parameters from the deterministic delay requirements of the sessions. In numerical experiments, we compare the CAC algorithm for ORC-GPS with that for GPS in terms of schedulable region and computational complexity.
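
    For context, the classical single-node fluid GPS delay bound referred to in the abstract (not the ORC-GPS or CAC results of the paper) can be stated as follows, assuming session i is leaky-bucket constrained with parameters (sigma_i, rho_i) and receives guaranteed rate g_i.

```latex
% Classical single-node fluid GPS bound (context only; not the ORC-GPS bound
% derived in the paper): session i is leaky-bucket constrained with burst
% \sigma_i and rate \rho_i, and receives guaranteed rate
% g_i = \frac{\phi_i}{\sum_j \phi_j}\, r on a server of rate r.
\begin{equation}
  g_i \ge \rho_i \;\Longrightarrow\; D_i^{\max} \le \frac{\sigma_i}{g_i}.
\end{equation}
% As the abstract notes, this bound is unnecessarily large because the weight
% \phi_i stays constant even after the burst is served; ORC-GPS lowers it by
% controlling the output rate.
```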