Yoshinori KITATSUJI Katsuyuki YAMAZAKI Masato TSURU Yuji OIE
There is an emerging requirement for real-time flow-based traffic monitoring, which is vital to detecting and/or tracing DoS attacks as well as to troubleshooting and traffic engineering in ISP networks. We propose an architecture for a scalable real-time flow measurement tool that allows operators to flexibly define targeted flows on demand, to obtain various statistics on those flows, and to visualize them in real time. The architecture comprises a traffic distribution device, which copies incoming traffic and distributes it, and multiple traffic capture devices, which process the packets in parallel. We evaluate the performance of a prototype implementation on PC-UNIX in testbed experiments to demonstrate the scalability of our architecture. The evaluation shows that performance increases in proportion to the number of capture devices, reaching a maximum of 80 Kpps with six capture devices. Finally, we show applications of our tool that demonstrate the advantage of flexible, fine-grained flow measurement.
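The abstract does not detail how the distribution device keeps a flow's packets together; the sketch below illustrates one common approach, hashing the flow 5-tuple so that every packet of a flow reaches the same capture device (the field names and hash choice are our assumptions, not necessarily the paper's):

```python
import hashlib

def flow_key(pkt):
    """Canonical 5-tuple used to keep all packets of a flow together."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"],
            pkt["dst_port"], pkt["proto"])

def assign_capture_device(pkt, n_devices):
    """Hash the 5-tuple to one of n_devices parallel capture devices."""
    digest = hashlib.md5(repr(flow_key(pkt)).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_devices

# Example: every packet of this flow lands on the same device.
pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 12345, "dst_port": 80, "proto": 6}
print(assign_capture_device(pkt, 6))
```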
Masato TSURU Mineo TAKAI Shigeru KANEDA Agussalim Rabenirina AINA TSIORY
In the evolution of wireless networks such as wireless sensor networks, mobile ad-hoc networks, and delay/disruption-tolerant networks, the Store-Carry-Forward (SCF) message relaying paradigm has been a common feature and has attracted much attention. SCF networking is essential for offsetting the deficiencies of intermittent and range-limited communication environments because it allows moving wireless communication nodes to act as “mobile relay nodes”. Such relay nodes can store/carry/process messages, wait for a better transmission opportunity, and finally forward the messages to other nodes. This paper starts with a short overview of SCF routing and then examines two SCF networking scenarios. The first deals with large content delivery across multiple islands using existing infrastructural transportation networks (e.g., cars and ferries), in which mobility is uncontrollable from an SCF viewpoint. Simulations show how a simple coding technique can improve flooding-based SCF. The second looks at a prototype unmanned aerial vehicle (UAV) system for high-quality video surveillance from the sky, in which mobility is partially controllable from an SCF viewpoint. Three requisite techniques in this scenario are highlighted: fast link setup, millimeter-wave communications, and the use of multiple links. Through these examples, we discuss the benefits and issues of the practical use of SCF networking-based systems.
Kenichi YOSHIDA Satoshi KATSUNO Shigehiro ANO Katsuyuki YAMAZAKI Masato TSURU
Network management is an important issue in maintaining the Internet as a vital social infrastructure. Finding the excessive consumption of network bandwidth caused by P2P mass flows is especially important, and finding Internet viruses is an important security issue. Although stream mining techniques seem promising for detecting P2P flows and Internet viruses, the vast volume of network flows prevents the straightforward application of such techniques: a mining technique that works well with extremely limited memory and offers real-time analysis capability is required. In this paper, we propose a cache-based mining method that realizes such a technique. By analyzing the characteristics of the proposed method with real Internet backbone flow data, we show its advantages, i.e., lower memory consumption while retaining real-time analysis capability. We also show that the proposed method can be used to extract mass-flow information from Internet backbone flow data.
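The abstract does not specify the cache algorithm itself; as a stand-in illustration of finding mass flows under a fixed memory budget, the sketch below uses the well-known Space-Saving counter cache (our choice for illustration, not necessarily the authors' method):

```python
def space_saving(stream, capacity):
    """Track heavy (mass) flows with at most `capacity` counters.

    Returns {flow_id: estimated_count}; counts may overestimate by
    at most the value of the evicted minimum counter.
    """
    counters = {}
    for flow_id in stream:
        if flow_id in counters:
            counters[flow_id] += 1
        elif len(counters) < capacity:
            counters[flow_id] = 1
        else:
            # Replace the minimum counter, inheriting its count + 1.
            victim = min(counters, key=counters.get)
            counters[flow_id] = counters.pop(victim) + 1
    return counters

# Example: flow "A" dominates a stream observed with only 3 counters.
stream = ["A", "B", "A", "C", "A", "D", "A", "E", "A"]
print(space_saving(stream, capacity=3))
```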
Masato TSURU Tetsuya TAKINE Yuji OIE
Because of the huge scale and distributed administration of the Internet, it is of practical importance to infer network-internal characteristics that cannot be measured directly. In this paper, based on a general framework we proposed previously, we present a feasible method of inferring the packet loss rates of individual links from end-to-end measurements of unicast probe packets. Compared with methods using multicast probes, unicast-based inference methods are more flexible and widely applicable, but they suffer from imperfect correlation of concurrent events on paths. Our method can infer link loss rates in spite of this problem and is applicable to various path topologies, including trees, inverse trees, and their combinations. We also show simulation results that indicate the potential of our unicast-based method.
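As a concrete illustration of the inference principle (not the paper's method, whose contribution is precisely to handle the imperfectly correlated unicast case), consider an idealized two-leaf tree in which paired probes are perfectly correlated on the shared link, as in multicast tomography; the per-link success rates then follow directly from end-to-end success rates:

```python
def infer_shared_link(p_a, p_b, p_ab):
    """Infer per-link success rates on a two-leaf tree.

    p_a, p_b : end-to-end success rates of paths A and B
    p_ab     : rate at which paired probes succeed on both paths
    Assumes perfectly correlated probes on the shared link 0, so
    p_a = p0*p1, p_b = p0*p2, and p_ab = p0*p1*p2.
    """
    p0 = p_a * p_b / p_ab          # shared link
    return p0, p_a / p0, p_b / p0  # links 0, 1, 2

# Example: a 2% loss shared link with 5% and 1% loss branches.
p0, p1, p2 = 0.98, 0.95, 0.99
print(infer_shared_link(p0 * p1, p0 * p2, p0 * p1 * p2))
# -> approximately (0.98, 0.95, 0.99); loss rate of link i is 1 - pi
```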
Masayoshi SHIMAMURA Takeshi IKENAGA Masato TSURU
The explosive growth of usage, along with a greater diversification of communication technologies and applications, requires the Internet to manage ever greater scalability and diversity, demanding more adaptive and flexible schemes for sharing network resources. Especially when a number of large-scale distributed applications concurrently share resources, the comprehensive and efficient use of network, computation, and storage resources is essential from the viewpoint of information processing performance. Therefore, reconsidering the coordination and partitioning of functions between networks (providers) and applications (users) has become a recent research topic. In this paper, we first address the need for, and discuss the feasibility of, adaptive network services realized by introducing special processing nodes inside the network. Then we present the design and implementation of an advanced relay node platform, with which a variety of advanced in-network processing can be easily prototyped and tested on Linux and off-the-shelf PCs. A key feature of the proposed platform is that integration between kernel and userland spaces enables various advanced relay processing to be developed easily and quickly. Finally, on top of the advanced relay node platform, we implement and test an adaptive packet compression scheme that we previously proposed. The experimental results show the feasibility of both the developed platform and the proposed adaptive packet compression.
Hiroshi YAMAMOTO Masato TSURU Katsuyuki YAMAZAKI Yuji OIE
In parallel computing systems using the master/worker model for distributed grid computing, as the size of the data to be handled grows, the increase in data transmission time degrades performance. For divisible workload applications, multiple-round scheduling algorithms have therefore been developed to mitigate the adverse effect of long data transmission times by dividing the data into chunks sent out in multiple rounds, thus overlapping the times required for computation and transmission. However, a standard multiple-round scheduling algorithm, Uniform Multi-Round (UMR), adopts a sequential transmission model in which the master communicates with one worker at a time, so the transmission capacity of the link attached to the master cannot be fully utilized due to the limits of worker-side capacity. In the present study, a Parallel Transferable Uniform Multi-Round algorithm (PTUMR) is proposed, which efficiently utilizes the data transmission capacity of network links by allowing chunks to be transmitted to workers in parallel. The algorithm divides workers into groups in a way that fully uses the link bandwidth of the master under certain constraints, and treats each group of workers as one virtual worker. In particular, introducing a Grouping Threshold effectively deals with workers that are highly heterogeneous in both data transmission and computation capacities. The master then schedules sequential data transmissions to the virtual workers optimally, as in UMR. Performance evaluations show that the proposed algorithm achieves turnaround times (i.e., makespan) that are significantly shorter than those of UMR and close to the theoretical lower limits, regardless of worker heterogeneity.
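A minimal sketch of the grouping step, simplified from the algorithm and ignoring the Grouping Threshold (function and variable names are ours): workers are greedily packed into groups whose aggregate transmission rate stays within the master's link bandwidth, and each group then acts as one virtual worker:

```python
def group_workers(worker_rates, master_bw):
    """Greedily pack workers into groups so that the total
    transmission rate of a group does not exceed the master's
    link bandwidth; each group acts as one virtual worker."""
    groups, current, used = [], [], 0.0
    for rate in sorted(worker_rates, reverse=True):
        if current and used + rate > master_bw:
            groups.append(current)
            current, used = [], 0.0
        current.append(rate)
        used += rate
    if current:
        groups.append(current)
    return groups

# Example: a 100 Mb/s master link feeding heterogeneous workers.
print(group_workers([60, 50, 40, 30, 10], master_bw=100))
# -> [[60], [50, 40], [30, 10]] : three virtual workers
```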
Yan ZHANG Masato UCHIDA Masato TSURU Yuji OIE
We present a TCP flow-level performance evaluation of error-rate-aware scheduling algorithms in Evolved UTRA and UTRAN networks. By introducing the error rate, i.e., the probability of transmission failure under a given wireless condition and instantaneous transmission rate, transmission efficiency can be improved without sacrificing the balance between system performance and user fairness. The performance with and without error rate awareness is compared across various TCP traffic models, user channel conditions, schedulers with different fairness constraints, and automatic repeat request (ARQ) types. The results indicate that error rate awareness makes resource allocation more reasonable and effectively improves both system and individual performance, especially for users with poor channel conditions.
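The key idea can be condensed into the scheduling metric: discount each user's instantaneous rate by its probability of transmission failure. A proportional-fair style sketch of this (our simplification, not the exact schedulers evaluated in the paper):

```python
def pick_user(users):
    """Select the user maximizing r_i * (1 - e_i) / T_i: a
    proportional-fair metric whose instantaneous rate r_i is
    discounted by the error rate e_i (probability of failure),
    normalized by the user's average throughput T_i."""
    def metric(u):
        return u["rate"] * (1.0 - u["error"]) / u["avg_throughput"]
    return max(users, key=metric)

users = [
    {"id": 1, "rate": 10.0, "error": 0.5, "avg_throughput": 2.0},
    {"id": 2, "rate": 6.0, "error": 0.1, "avg_throughput": 2.0},
]
# Without error awareness user 1 wins (10 > 6); with it, user 2 wins
# because 6 * 0.9 = 5.4 exceeds 10 * 0.5 = 5.0.
print(pick_user(users)["id"])  # -> 2
```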
Yoshinori KITATSUJI Satoshi KATSUNO Katsuyuki YAMAZAKI Masato TSURU Yuji OIE
The monitoring of VoIP traffic performance has become vital because users generally expect VoIP service quality as high as that of PSTN services. This paper proposes a lightweight processing method for extracting VoIP flows from Internet traffic, together with a method for estimating delay variation and the packet loss ratio that exploits a specific characteristic of VoIP flows: the inter-packet gap (IPG), which is constant in VoIP flows. Simulation with actual traffic traces is used to evaluate the method, revealing that delay variation (IPG variance) can be accurately estimated by monitoring only a few percent of all flows. The proposed method can be used as a first-alert tool that monitors large numbers of flows to detect signs of degradation in VoIP flows. ISPs can use it to estimate whether VoIP flow performance is adequate within their networks and at ingress points from other ISPs.
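A toy sketch of the detection and estimation principle: a flow whose mean inter-packet gap matches a codec's packetization interval is a VoIP candidate, and the variance of its received IPGs serves as an estimate of the delay variation introduced by the network (the tolerance and the 20 ms interval are illustrative assumptions):

```python
from statistics import mean, pvariance

def ipg_stats(arrival_times):
    """Return (mean IPG, IPG variance) of one flow."""
    gaps = [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]
    return mean(gaps), pvariance(gaps)

def looks_like_voip(arrival_times, nominal_ipg=0.020, tol=0.25):
    """Flag the flow if its mean IPG is within tol of the codec's
    nominal packetization interval (e.g., 20 ms for G.711)."""
    m, _ = ipg_stats(arrival_times)
    return abs(m - nominal_ipg) / nominal_ipg < tol

# Example: a 20 ms stream with mild jitter; the variance reflects
# the delay variation experienced in the network.
times = [0.0, 0.021, 0.040, 0.061, 0.080, 0.101]
print(looks_like_voip(times), ipg_stats(times)[1])
```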
Nguyen VIET HA Kazumi KUMAZOE Masato TSURU
The Transmission Control Protocol (TCP) with Network Coding (TCP/NC) was proposed to provide packet loss recovery at the sink without TCP retransmission, realized by proactively sending redundant combination packets encoded at the source. Although TCP/NC is expected to mitigate the goodput degradation of TCP over lossy networks, the original TCP/NC does not work well under burst loss and in time-varying channels, and no explicit scheme has been provided for deciding and changing the network coding-related parameters (NC parameters) to suit diverse and changing loss conditions. In this paper, we propose a solution that enables TCP/NC to adapt to such conditions, called TCP/NC with Loss Rate and Loss Burstiness Estimation (TCP/NCwLRLBE). Both the packet loss rate and the loss burstiness are estimated by observing transmitted packets, so as to adapt to burst loss channels. Appropriate NC parameters are calculated from the estimated probability of successful recoverable transmission based on a mathematical model of packet losses. Moreover, a new mechanism for coding window handling is developed to update NC parameters in the coding system promptly. The proposed scheme is implemented and validated in Network Simulator 3 with two different types of burst loss model. The results suggest the potential of TCP/NCwLRLBE to mitigate TCP goodput degradation in both random loss and burst loss channels under time-varying conditions.
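A compact sketch of the estimation idea under a simple two-state burst loss view (our simplification; the paper derives NC parameters from a mathematical model of successful recoverable transmission): estimate the loss rate and the mean loss-burst length from the observed trace, then provision redundancy per coding window accordingly:

```python
import math

def estimate_loss(trace):
    """trace: list of 0 (received) / 1 (lost) per packet.
    Returns (loss_rate, mean_burst_length)."""
    losses = sum(trace)
    bursts = sum(1 for i, x in enumerate(trace)
                 if x == 1 and (i == 0 or trace[i - 1] == 0))
    rate = losses / len(trace)
    burst_len = losses / bursts if bursts else 0.0
    return rate, burst_len

def redundancy(rate, burst_len, window, margin=1.0):
    """Combination packets to add per coding window of `window`
    packets: expected losses plus one extra burst as a margin."""
    return math.ceil(window * rate + margin * burst_len)

trace = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0] * 10  # 40% loss in short bursts
rate, blen = estimate_loss(trace)
print(rate, blen, redundancy(rate, blen, window=10))  # -> 0.4 2.0 6
```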
Masato UCHIDA Kei OHNISHI Kento ICHIKAWA Masato TSURU Yuji OIE
In this paper, we propose a file replication scheme inspired by the phenomenon of thermal diffusion for storage load balancing in unstructured peer-to-peer (P2P) file sharing networks. The proposed scheme is designed so that the storage utilization ratios of peers become uniform, in the same way that the temperature in a field becomes uniform through thermal diffusion. The scheme creates replicas of files in peers probabilistically, where the probability is controlled by parameters that determine the trade-off between storage load balancing and search performance in unstructured P2P file sharing networks. First, we show through theoretical analysis that the statistical behavior of the storage load balancing controlled by the proposed scheme is analogous to the thermal diffusion phenomenon. We then show through simulation that the proposed scheme not only achieves superior storage load balancing among peers (the primary objective of the present proposal) but also allows the performance trade-off to be explored over a wide range. Finally, based on the simulation results, we qualitatively discuss a guideline for setting the parameter values so as to explore this trade-off widely.
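The diffusion analogy can be made concrete: just as heat flux is proportional to a temperature gradient, a peer copies load to a neighbor with a probability proportional to the difference in their storage utilization ratios. A minimal sketch (the coefficient and step size are hypothetical parameters, not the paper's):

```python
import random

def replicate_prob(u_self, u_neighbor, alpha=0.5):
    """Probability of moving a replica from self to neighbor,
    proportional to the utilization gradient (like heat flux);
    alpha plays the role of a diffusion coefficient."""
    return max(0.0, alpha * (u_self - u_neighbor))

def step(utilization, edges, alpha=0.5, unit=0.01):
    """One diffusion step: move `unit` of storage load across
    each edge with the gradient-dependent probability."""
    u = dict(utilization)
    for i, j in edges:
        for src, dst in ((i, j), (j, i)):
            if random.random() < replicate_prob(u[src], u[dst], alpha):
                u[src] -= unit
                u[dst] += unit
    return u

# Example: a 3-peer line network converging toward uniform load.
u = {"p1": 0.9, "p2": 0.5, "p3": 0.1}
for _ in range(200):
    u = step(u, [("p1", "p2"), ("p2", "p3")])
print({k: round(v, 2) for k, v in u.items()})
```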
Masayoshi SHIMAMURA Hiroyuki KOGA Takeshi IKENAGA Masato TSURU
Introducing adaptive online data compression at network-internal nodes is a way to alleviate traffic congestion in the network. In this paper, we assume that advanced relay nodes, which possess both a relay function (network resources) and a processing function (computational and storage resources), are placed inside the network, and we propose an adaptive online lossless packet compression scheme for these nodes. The scheme selectively compresses a packet according to its waiting time in the queue during congestion. Through a preliminary study using actual traffic datasets, we examine the compression ratio and processing time of packet-by-packet compression in actual network environments. Then, by means of computer simulations, we show that the proposed scheme reduces packet delay and the discard rate, and we identify the factors necessary for achieving efficient packet relay.
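A minimal sketch of the selection rule, assuming zlib as the lossless compressor (the threshold value is hypothetical): a packet is compressed only if it has waited in the queue longer than a threshold, i.e., only under congestion, and only if compression actually shrinks it:

```python
import time
import zlib

WAIT_THRESHOLD = 0.005  # seconds; compress only when congested

def maybe_compress(payload, enqueued_at):
    """Return the (possibly compressed) payload and a flag.
    Compression is skipped for packets that did not wait long
    (no congestion) and for payloads that do not shrink."""
    waited = time.monotonic() - enqueued_at
    if waited < WAIT_THRESHOLD:
        return payload, False
    compressed = zlib.compress(payload, level=1)  # fast level
    if len(compressed) >= len(payload):
        return payload, False
    return compressed, True

# Example: a compressible payload that has waited 10 ms.
payload = b"abcabcabc" * 100
out, flag = maybe_compress(payload, time.monotonic() - 0.010)
print(flag, len(payload), "->", len(out))
```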
Akira NAGATA Shinya YAMAMURA Masato TSURU
Motivated by the question of how to transfer large files quickly when multiple heterogeneous networks are available but none alone has sufficient performance for the requested task, we propose a data transfer framework that integrates multiple heterogeneous challenged access networks, in which long delays, heavy packet losses, and frequent disconnections are observed. An important feature of this framework is that control information is transmitted separately from data, with each flexibly transferred over different types of communication media (network paths) in different ways, thereby providing a virtual single network path between two nodes. We describe the design of the framework's mechanisms, such as retransmission, rate adjustment of each data flow, and data-flow setup control. We validate a prototype implementation through two different experiments using terrestrial networks and a satellite communication system.
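A toy sketch of the data-plane side of such integration (path names and rates are hypothetical, and this is our illustration rather than the framework's actual scheduler): blocks of a file are assigned to whichever network path is expected to finish them first, while the separate control channel would carry acknowledgments and re-queue lost blocks:

```python
import heapq

def schedule_blocks(n_blocks, block_size, path_rates):
    """Assign blocks to the path with the earliest expected
    completion time. Returns {path: [block indices]}."""
    # Heap of (next available time, path name).
    heap = [(0.0, p) for p in sorted(path_rates)]
    heapq.heapify(heap)
    plan = {p: [] for p in path_rates}
    for b in range(n_blocks):
        t, p = heapq.heappop(heap)
        plan[p].append(b)
        heapq.heappush(heap, (t + block_size / path_rates[p], p))
    return plan

# Example: a fast terrestrial link plus a slow satellite link.
print(schedule_blocks(10, block_size=1.0,
                      path_rates={"terrestrial": 4.0, "satellite": 1.0}))
```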
Masato TSURU Nobuo RYOKI Yuji OIE
The recent evolution of network tomography has provided principles and methodologies for inferring network-internal (local) characteristics solely from end-to-end measurements, which should now be followed by deployment in practical use. In this paper, two kinds of user-oriented tools for inferring one-way packet losses based on network tomography are proposed. They can infer one-way packet loss rates on paths or path segments from/to a user host (a client) to/from a specified target host (an application server or a router) without any measurement on the target, and can thus locate the congested area along the path between the client and an application server. One is a stand-alone tool running on the client; the other is a client-server style tool running on both the client and a proxy measurement server deployed in the Internet. Prototypes of the tools have been developed and evaluated by experiments in the actual Internet environment, showing that the tools can infer the loss rates within 1% error under various network conditions.
Masato UCHIDA Shuichi NAWATA Yu GU Masato TSURU Yuji OIE
We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is then used as the basis for comparison with audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is employed in a manner that differs from its intended purpose: the lossy nature of packet sampling is exploited to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that, in detecting anomalies regarding TCP SYN packets, the proposed method has false positive and false negative rates comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by ensemble anomaly detection, which collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by maximum- and minimum-based anomaly detection, which effectively takes advantage of the performance variation among the multiple baseline models. Testing with actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
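A condensed sketch of the ensemble idea (our simplification, with a simple mean/std baseline standing in for the paper's model): several baselines are trained on independently sampled traffic, each one scores the audit window as a deviation from its baseline, and the alarm is driven by the maximum score (sensitive) or the minimum (conservative):

```python
from statistics import mean, pstdev

def train_baseline(sampled_counts):
    """Baseline = mean and std of a feature (e.g., SYN packets
    per interval) measured on time-periodically sampled traffic."""
    return mean(sampled_counts), pstdev(sampled_counts)

def score(baseline, observed):
    """Deviation of the audit window from the baseline, in stds."""
    mu, sigma = baseline
    return abs(observed - mu) / sigma if sigma else 0.0

def ensemble_alarm(baselines, observed, thresh=3.0, mode=max):
    """mode=max -> sensitive alarm; mode=min -> conservative."""
    return mode(score(b, observed) for b in baselines) > thresh

# Example: three baselines from different sampling phases.
baselines = [train_baseline(c) for c in
             ([10, 12, 11, 9], [11, 10, 13, 10], [9, 11, 10, 12])]
print(ensemble_alarm(baselines, observed=30))            # -> True
print(ensemble_alarm(baselines, observed=12, mode=min))  # -> False
```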
Suguru YOSHIMIZU Hiroyuki KOGA Katsushi KOUYAMA Masayoshi SHIMAMURA Kazumi KUMAZOE Masato TSURU
With the emergence of bandwidth-greedy application services, high-speed transport protocols are expected to use the large amounts of bandwidth in current broadband and multimedia networks effectively and aggressively. However, when high-speed transport protocols compete with standard TCP flows, they can occupy most of the available bandwidth, leading to disruption of service. To deploy high-speed transport protocols on the Internet, such unfair situations must be remedied. In this paper, we therefore propose a method to improve fairness, called Kyushu-TCP (KTCP), which introduces a non-aggressive period in the congestion avoidance phase to give standard TCP flows more chances to increase their transmission rates. The method improves fairness in terms of throughput by estimating the stably available bandwidth-delay product and adjusting the transmission rate based on this estimate. We show the effectiveness of the proposed method through simulations.
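A rough sketch of the window control idea as we read it from the abstract (not the actual KTCP algorithm; the pause length is a hypothetical parameter): the sender grows its window as usual during congestion avoidance, but holds it once it reaches the estimated stably available bandwidth-delay product, leaving headroom for competing standard TCP flows:

```python
def next_cwnd(cwnd, est_bdp, in_pause, pause_left):
    """One congestion-avoidance step of a KTCP-like sender.
    Returns (new_cwnd, in_pause, pause_left)."""
    PAUSE_RTTS = 8  # length of the non-aggressive period, in RTTs
    if in_pause:
        pause_left -= 1
        return cwnd, pause_left > 0, pause_left
    if cwnd + 1 > est_bdp:
        # Reached the estimated fair share: stop growing for a while.
        return cwnd, True, PAUSE_RTTS
    return cwnd + 1, False, 0  # standard additive increase

# Example: growth stalls at the estimated BDP of 20 packets.
cwnd, pause, left = 15, False, 0
for rtt in range(12):
    cwnd, pause, left = next_cwnd(cwnd, 20, pause, left)
print(cwnd)  # -> 20 (held during the non-aggressive period)
```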
Yuki SAKAI Masato UCHIDA Masato TSURU Yuji OIE
A basic and inevitable problem in estimating flow duration distributions arises from "censoring" (i.e., cutting off) the observed flow durations because of a finite measurement period. We extend the Kaplan-Meier method, used in the field of survival analysis, and apply it to recover the information on the flow duration distribution that is lost to censoring. We show that the flow duration distribution estimated by the Kaplan-Meier-based method from a short period of actual, censored traffic data closely approximates the flow duration distribution calculated from a sufficiently long period of actual traffic data.
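The core computation is the standard Kaplan-Meier product-limit estimator, which treats a flow still alive at the end of the measurement period as right-censored. A minimal self-contained sketch (assuming at most one flow ends at each distinct time):

```python
def kaplan_meier(durations, observed):
    """Survival function S(t) from possibly censored flow durations.

    durations : flow lifetimes (seconds); for censored flows this is
                the time observed before the measurement period ended
    observed  : True if the flow ended inside the period,
                False if it was cut off (censored)
    Returns a list of (t, S(t)) at each observed event time.
    """
    events = sorted(zip(durations, observed))
    at_risk, s, curve = len(events), 1.0, []
    for t, ended in events:
        if ended:
            s *= (at_risk - 1) / at_risk  # one death among those at risk
            curve.append((t, s))
        at_risk -= 1  # censored flows leave the risk set silently
    return curve

# Example: flows end at 1 s and 3 s; flows censored at 2 s and 4 s.
print(kaplan_meier([1.0, 2.0, 3.0, 4.0], [True, False, True, False]))
# -> [(1.0, 0.75), (3.0, 0.375)]
```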
Nguyen VIET HA Kazumi KUMAZOE Masato TSURU
In general, the Transmission Control Protocol (TCP), e.g., TCP NewReno, considers every loss to be a sign of congestion and decreases the sending rate whenever a loss is detected. Integrating network coding (NC) into the protocol stack and making it cooperate with TCP (TCP/NC) provides the benefit of masking packet losses in lossy networks, e.g., wireless networks. TCP/NC complements TCP with a packet loss recovery capability that requires no retransmission at the sink, achieved by sending redundant combination packets encoded at the source. However, TCP/NC is less effective under the heavy and bursty loss that often occurs in fast-fading channels, because its retransmission mechanism relies entirely on the TCP layer. Our solution is TCP/NC with enhanced retransmission (TCP/NCwER), in which a new retransmission mechanism is developed to retransmit more than one lost packet quickly and efficiently, to allow retransmitted packets to be encoded in order to reduce repeated losses, and to handle dependent combination packets in order to avoid decoding failures. We implement and test our proposal in Network Simulator 3. The results show that TCP/NCwER overcomes the deficiencies of the original TCP/NC and improves TCP goodput under both random loss and burst loss channels.
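A toy illustration of why encoding retransmissions reduces repeated losses (XOR over GF(2) for brevity; TCP/NC itself uses random linear combinations over a larger field, and this is not the TCP/NCwER mechanism itself): a single retransmitted combination can repair the sink no matter which one packet of the window is still missing:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def coded_retransmission(packets):
    """Retransmit ONE combination (XOR) of all packets in the
    window: whichever single packet is still missing at the sink,
    this one coded packet can repair it."""
    combo = packets[0]
    for p in packets[1:]:
        combo = xor(combo, p)
    return combo

def recover(combo, packets, received_idx):
    """The sink XORs the combination with the packets it already
    holds; what remains is the missing packet."""
    missing = combo
    for i in received_idx:
        missing = xor(missing, packets[i])
    return missing

# Example: the sink holds packets 0 and 2; packet 1 was lost.
pkts = [b"AAAA", b"BBBB", b"CCCC"]
combo = coded_retransmission(pkts)
print(recover(combo, pkts, received_idx={0, 2}))  # -> b'BBBB'
```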
Masayoshi SHIMAMURA Takeshi IKENAGA Masato TSURU
The explosive growth of Internet usage has caused problems for the current Internet in terms of traffic congestion within networks and performance degradation of end-to-end flows. A reconsideration of the current Internet has therefore begun and is being actively discussed worldwide, with the goals of enabling efficient sharing of limited network resources (i.e., link bandwidth) and improving performance. To directly address the inefficiency of TCP's congestion mitigation performed solely on an end-to-end basis, in this paper we propose an adaptive split-connection scheme for advanced relay nodes, which dynamically splits end-to-end TCP connections based on the congestion status of the output links. Through simulation evaluations, we examine the effectiveness and potential of the proposed scheme.
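A minimal sketch of the splitting decision as we read it (the thresholds and the hysteresis are our illustrative assumptions): the relay node splits a TCP connection while its output link is congested and relays transparently otherwise:

```python
class SplitController:
    """Decide per flow whether an advanced relay node should split
    an end-to-end TCP connection, with hysteresis so that flows are
    not split and merged repeatedly around one threshold."""
    def __init__(self, high=0.8, low=0.5):
        self.high, self.low = high, low
        self.split = {}  # flow id -> currently split?

    def update(self, flow, queue_occupancy):
        if queue_occupancy >= self.high:
            self.split[flow] = True       # congestion: split here
        elif queue_occupancy <= self.low:
            self.split[flow] = False      # relieved: relay transparently
        return self.split.get(flow, False)

ctl = SplitController()
print(ctl.update("f1", 0.9))  # -> True  (output link congested)
print(ctl.update("f1", 0.7))  # -> True  (hysteresis keeps the split)
print(ctl.update("f1", 0.4))  # -> False (back to transparent relay)
```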
Masayoshi SHIMAMURA Hiroaki YAMANAKA Akira NAGATA Katsuyoshi IIDA Eiji KAWAI Masato TSURU
Network virtualization environments (NVEs) are emerging to meet the increasingly diverse demands of Internet users, where a virtual network (VN) can be constructed to accommodate each specific application service. In the future Internet, diverse service providers (SPs) will provide application services on their own VNs running across diverse infrastructure providers (InPs) that provide physical resources in an NVE. To realize both efficient resource utilization and good QoS for each individual service in such environments, SPs should perform adaptive control of network and computational resources under dynamic and competitive resource sharing, instead of explicitly reserving sufficient physical resources for their VNs. Meanwhile, two novel concepts, software-defined networking (SDN) and network function virtualization (NFV), have emerged to facilitate the efficient use of network and computational resources, flexible provisioning, network programmability, unified management, and so on, enabling the implementation of adaptive resource control. In this paper, therefore, we propose an architectural design of network orchestration that enables SPs to aggressively maintain the QoS of their applications through efficient resource control on their VNs, by introducing a virtual network provider (VNP) between InPs and SPs to form a three-tier model, and by integrating SDN and NFV functionalities into the NVE framework. We define new north-bound interfaces (NBIs) for resource requests, resource upgrades, resource programming, and alert notifications, while using the standard OpenFlow interfaces for resource control on users' traffic flows. The feasibility of the proposed architecture is demonstrated through network experiments using a prototype implementation and a sample application service on the nation-wide testbed networks JGN-X and RISE.
Kazumi KUMAZOE Masato TSURU Yuji OIE
The performance of a real-time networked application can be drastically affected by delays in packets traversing the network. Some real-time applications impose limits on acceptable network delay, so a packet delayed beyond the limit before arriving at its destination is worthless to the flow to which it belongs. Worse, such a packet also damages the quality of other flows in the network, because it may increase the queuing delay of other packets. This paper therefore proposes an adaptive scheme, comprising two mechanisms, in which packets that have experienced excessive delay are discarded at intermediate nodes based on the application's delay limit and the delay experienced by each packet. This early discarding of packets is expected to improve the overall delay performance of real-time flows competing for network resources when the network is congested. Extensive simulations are conducted, and the results show that the scheme has great potential to improve the delay performance of real-time traffic in both homogeneous and heterogeneous environments with regard to traffic volume and application delay requirements.
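A minimal sketch of the discard rule at an intermediate node (parameter names are ours): a packet is dropped as soon as the delay it has already accumulated, plus a lower bound on the delay still ahead of it, exceeds the application's limit, so it stops consuming queue space that other packets need:

```python
def should_discard(elapsed_delay, min_remaining_delay, app_delay_limit):
    """Drop a real-time packet at an intermediate node if it can no
    longer reach the destination within the application's acceptable
    delay; a late packet only adds queuing delay for other flows."""
    return elapsed_delay + min_remaining_delay > app_delay_limit

# Example: 150 ms limit, 120 ms already spent, at least 40 ms ahead.
print(should_discard(0.120, 0.040, 0.150))  # -> True: drop now
```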