Noriaki KAMIYAMA Ryoichi KAWAHARA Tatsuya MORI Haruhisa HASEGAWA
In Video on Demand (VoD) services, the demand for content items changes greatly over the course of the day. Because service providers are required to maintain a stable service during peak hours, they need to provision system resources on the basis of peak demand, so reducing the server load at peak times is important. To reduce the peak load of a content server, we propose multicasting popular content items to all users, independently of actual requests, in addition to providing on-demand unicast delivery. With this solution, however, the hit ratio of pre-distributed content items is low, and large-capacity storage is required at each set-top box (STB). This problem can be mitigated by limiting the number of pre-distributed content items or by clustering users based on their viewing histories. We evaluated the effect of these two techniques using actual VoD access log data. We also evaluated the total cost of the multicast pre-distribution VoD system employing them.
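To make the pre-distribution idea concrete, here is a minimal sketch of popularity-based selection and the resulting hit ratio, assuming a hypothetical access log given as (user_id, content_id) request records; the clustering refinement mentioned above would apply the same selection per user cluster.

```python
from collections import Counter

def select_predistribution_set(access_log, k):
    # Pick the k most-requested content IDs from a request log given as
    # (user_id, content_id) records; the log format is hypothetical.
    counts = Counter(content_id for _, content_id in access_log)
    return {cid for cid, _ in counts.most_common(k)}

def predistribution_hit_ratio(access_log, predistributed):
    # Fraction of requests that the pre-distributed set would absorb,
    # i.e., requests that never reach the unicast server.
    hits = sum(1 for _, cid in access_log if cid in predistributed)
    return hits / len(access_log)
```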
Yasuhiro IKEDA Ryoichi KAWAHARA Noriaki KAMIYAMA Tatsuaki KIMURA Tatsuya MORI
We analyze measured traffic data to investigate the characteristics of TCP quality metrics such as packet retransmission rate, round-trip time (RTT), and throughput of connections classified by their type (client-server (C/S) or peer-to-peer (P2P)) or by the location of the connection host (domestic or overseas). Our findings are as follows. (i) The TCP quality metrics of the measured traffic data are not necessarily consistent with a theoretical formula proposed in a previous study. However, the average RTT and retransmission rate are negatively correlated with throughput, in qualitative agreement with that formula. Furthermore, the maximum idle time, defined as the maximum length of the packet interarrival times, is negatively correlated with throughput. (ii) C/S connections show higher quality than P2P connections in each TCP quality metric. Here, "higher quality" means that either the throughput is higher or the other TCP quality metrics lead to higher throughput; for example, the average RTT or the retransmission rate is lower. Specifically, the median throughput of C/S connections is 2.5 times higher than that of P2P connections in the incoming direction of domestic traffic. (iii) The characteristics of TCP quality metrics depend on the location of the host of the TCP connection. There are cases in which overseas servers might use a different TCP congestion control scheme. Even if we eliminate these servers, there is still a difference between domestic and overseas traffic in the degree of impact the average RTT has on throughput. One reason for this is thought to be the difference in maximum idle time; another is that the congestion levels of these types of traffic differ even when their average RTTs are the same.
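The abstract does not name the theoretical formula; a widely cited formula of this kind is the square-root model of Mathis et al., which already exhibits the negative correlation of throughput with both average RTT and loss (retransmission) rate noted in finding (i). A minimal sketch:

```python
import math

def tcp_throughput_mathis(mss_bytes, rtt_s, loss_rate):
    # Mathis et al. (1997) steady-state bound:
    # throughput ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)  [bytes/s],
    # so throughput falls as either RTT or the loss rate p rises.
    return (mss_bytes / rtt_s) * math.sqrt(1.5 / loss_rate)

# Example: MSS = 1460 B, RTT = 100 ms, 1% loss -> about 1.4 Mbit/s.
print(tcp_throughput_mathis(1460, 0.1, 0.01) * 8 / 1e6, "Mbit/s")
```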
Noriaki KAMIYAMA Tatsuya MORI Ryoichi KAWAHARA Haruhisa HASEGAWA
Recently, the number of users downloading video content on the Internet has dramatically increased, and downloading huge, rich content items such as movie files is expected to become a popular use of the Internet in the near future. The transmission bandwidth consumed by delivering rich content is enormous, so it is urgent for ISPs to design an efficient delivery system that minimizes the amount of network resources consumed. To deliver web content efficiently, a content delivery network (CDN) is often used. CDN providers collocate a huge number of servers within multiple ISPs without being given detailed network information, i.e., network topologies, by the ISPs. Minimizing the amount of network resources consumed is difficult because a CDN provider selects a server for each request based only on rough estimates of response time. Therefore, an ordinary CDN is not suited to delivering rich content. P2P-based delivery systems are becoming popular as scalable delivery systems. However, with a P2P-based system, we still cannot obtain the delivery pattern that is optimal for ISPs because the server locations depend on users behaving selfishly. To provide rich content to users economically and efficiently, an ISP itself should optimally provide servers with huge storage capacities at a limited number of locations within its network. In this paper, we investigate the content deployment method, the content delivery process, and the server allocation method desirable for this ISP-operated CDN. Moreover, we evaluate the effectiveness of the ISP-operated CDN using the actual network topologies of commercial ISPs.
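As a rough illustration of ISP-side server allocation, the following is a standard greedy facility-location sketch (not necessarily the paper's algorithm): with full topology knowledge, servers are placed one by one at the node that most reduces the demand-weighted distance to the nearest server. The `demand` and `dist` inputs are hypothetical stand-ins.

```python
def greedy_server_allocation(nodes, demand, dist, k):
    # Greedy k-median-style placement: demand maps each node to its
    # traffic volume; dist[u][v] is the hop count between nodes u and v.
    placed = []
    def total_cost(sites):
        # Total demand-weighted distance to the nearest placed server.
        return sum(demand[u] * min(dist[u][s] for s in sites) for u in nodes)
    for _ in range(k):
        best = min((n for n in nodes if n not in placed),
                   key=lambda n: total_cost(placed + [n]))
        placed.append(best)
    return placed
```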
Load balancing among multiple mirror servers located at distributed positions in the network is a key technique for content delivery services. For bandwidth-allocated services, we consider how to select a suitable server, at the time of each request, from several candidates holding the same content. We propose limiting the candidate servers in advance and selecting one from this limited set in round-robin fashion. The server sets that minimize the variance of link load are derived using a greedy method for a given network topology and service demand. Through numerical evaluation, we demonstrate that the proposed method is superior to previous methods.
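A minimal sketch of this selection scheme under simplifying assumptions: round-robin selection spreads each user's demand equally over the candidate set, and the set is grown greedily to minimize the variance of link loads. The `paths` input (precomputed links on each user-server route) and a single shared candidate set are assumptions of this sketch.

```python
import statistics

def link_load_variance(selected, demand, paths, links):
    # Per-link load when each user's demand is split equally (the effect
    # of round-robin selection) among the selected candidate servers.
    load = dict.fromkeys(links, 0.0)
    for user, d in demand.items():
        share = d / len(selected)
        for server in selected:
            for link in paths[(user, server)]:
                load[link] += share
    return statistics.pvariance(load.values())

def greedy_candidate_set(servers, demand, paths, links, k):
    # Grow the candidate set to size k, at each step adding the server
    # whose inclusion yields the lowest link-load variance.
    selected = []
    while len(selected) < k:
        best = min((s for s in servers if s not in selected),
                   key=lambda s: link_load_variance(
                       selected + [s], demand, paths, links))
        selected.append(best)
    return selected
```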
Broadcast data delivery is attractive for large-size data distribution where a large user community is connected to a server through a network. A broadcast scheduling method should minimize the average response time and also guarantee the expected waiting time at the moment a request is made. In this paper, we propose a method that divides all titles into several groups and applies FIFO scheduling to each group. The proposed method can guarantee the waiting time for each user at the time of the request, and it is superior to plain FIFO under high load and to a fixed allocation method under low load.
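The abstract leaves the grouping rule unspecified; the sketch below uses a hypothetical popularity-based grouping purely for illustration and shows why per-group FIFO lets the system quote a waiting-time bound at request time: the queue position bounds the number of broadcasts that precede yours.

```python
from collections import deque

def group_titles(popularity, num_groups):
    # Hypothetical grouping rule: deal the titles, sorted by popularity,
    # round-robin into num_groups groups.
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    return [ranked[g::num_groups] for g in range(num_groups)]

class GroupedFIFOScheduler:
    # One FIFO broadcast queue per group; the queue position at request
    # time bounds the wait, so a guarantee can be quoted immediately.
    def __init__(self, groups):
        self.group_of = {t: g for g, ts in enumerate(groups) for t in ts}
        self.queues = [deque() for _ in groups]

    def request(self, title):
        q = self.queues[self.group_of[title]]
        q.append(title)
        return len(q)  # broadcasts to complete before this one, inclusive

    def next_broadcast(self, group):
        q = self.queues[group]
        return q.popleft() if q else None
```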
Nozomu KATAYAMA Takeshi FUJIMURA Hiroyoshi MIWA Noriaki KAMIYAMA Haruhisa HASEGAWA Hideaki YOSHINO
When a link or node fails in a network, the affected flows are automatically rerouted. This increases the hop counts of the flows, which can drastically degrade network performance. Keeping the hop lengths as stable as possible, i.e., minimizing the difference in hop length between the original flow and the rerouted flow, is important for network reliability. Therefore, network service providers need a method for designing networks that stabilizes the flow hop length and maintains connectivity during a link or node failure with limited investment cost. First, we formulate the network design problem of determining a set of links to be added that satisfies the required constraints on flow hop-length stability, connectivity, and node degree. Next, we prove that this problem is NP-complete and present two approximation algorithms for the optimization problem of minimizing the number of added links. Evaluation of these algorithms using 39 backbone networks of commercial ISPs and networks generated by two well-known models showed that they provide effective solutions in sufficiently short computation time.
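In the spirit of the approximation algorithms (not the paper's exact procedure), the following greedy sketch adds links until a simplified stability criterion holds: after any single link failure, no node pair's hop distance may grow by more than `delta`, subject to a per-node degree cap. The input graph is assumed connected, with sortable node labels.

```python
from collections import deque
import itertools

def hop_counts(adj, src):
    # BFS hop counts from src over an adjacency dict {node: set(neighbors)}.
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def violations(adj, delta):
    # Count node pairs whose hop distance after some single link failure
    # grows by more than delta hops or that become disconnected.
    base = {u: hop_counts(adj, u) for u in adj}
    links = {frozenset((u, v)) for u in adj for v in adj[u]}
    bad = 0
    for u, v in map(tuple, links):
        adj[u].remove(v); adj[v].remove(u)
        for s in adj:
            d = hop_counts(adj, s)
            bad += sum(1 for t in adj if s < t
                       and (t not in d or d[t] > base[s][t] + delta))
        adj[u].add(v); adj[v].add(u)
    return bad

def greedy_augment(adj, delta, max_degree):
    # Repeatedly add the absent link that most reduces the violation
    # count, never exceeding the per-node degree cap.
    added = []
    while violations(adj, delta) > 0:
        cands = [(u, v) for u, v in itertools.combinations(sorted(adj), 2)
                 if v not in adj[u]
                 and len(adj[u]) < max_degree and len(adj[v]) < max_degree]
        if not cands:
            break  # degree caps block further repair
        def after(uv):
            u, v = uv
            adj[u].add(v); adj[v].add(u)
            n = violations(adj, delta)
            adj[u].remove(v); adj[v].remove(u)
            return n
        u, v = min(cands, key=after)
        adj[u].add(v); adj[v].add(u)
        added.append((u, v))
    return added
```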
Naoya MAKI Takayuki NISHIO Ryoichi SHINKUMA Tatsuya MORI Noriaki KAMIYAMA Ryoichi KAWAHARA Tatsuro TAKAHASHI
In content services where people purchase and download large-volume content, minimizing network traffic is crucial for the service provider and the network operator, since they want to lower the cost charged for bandwidth and the cost of network infrastructure, respectively. Traffic localization is an effective way of reducing network traffic: traffic is localized when a client can obtain the requested content files from a nearby altruistic client instead of from the source servers. The peer-assisted content distribution network (CDN) concept can reduce overall traffic with this mechanism and enables service providers to minimize traffic without deploying or borrowing distributed storage. To localize traffic effectively, content files that are likely to be requested by many clients should be cached locally. This paper presents a novel traffic engineering scheme for peer-assisted CDN models. Its key idea is to control the behavior of clients by using a content-oriented incentive mechanism. This approach enables us to optimize traffic flows by letting altruistic clients download the content files that are most likely to contribute to localizing traffic among clients. To make altruistic clients request the desired files, we combine content files while keeping the price equal to that of a single content file. This paper presents a solution for optimizing the selection of content files to be combined so that cross traffic in the network is minimized. We also give a model for analyzing the upper-bound performance and present numerical results.
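The paper formulates an optimization; the toy rule below conveys only the intuition behind selecting a bundling partner: pair the purchased file with the file not yet held nearby whose expected cross traffic saved (popularity times size) is largest. All input names are hypothetical.

```python
def choose_bundle_partner(purchased, catalog, local_cache, popularity, size):
    # Toy heuristic, not the paper's optimization: rank files absent from
    # the local region by popularity * size, i.e., the cross traffic that
    # caching them locally would be expected to save.
    candidates = [c for c in catalog
                  if c != purchased and c not in local_cache]
    if not candidates:
        return None
    return max(candidates, key=lambda c: popularity[c] * size[c])
```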
Yousuke TAKAHASHI Keisuke ISHIBASHI Masayuki TSUJINO Noriaki KAMIYAMA Kohei SHIOMOTO Tatsuya OTOSHI Yuichi OHSITA Masayuki MURATA
To use network resources efficiently, Internet service providers need to conduct traffic engineering that dynamically controls traffic routes to accommodate traffic change with limited network resources. The performance of traffic engineering (TE) depends on the accuracy of traffic prediction. However, the magnitude of traffic change has drastically increased in recent years due to the growth of various types of network services, which has made traffic prediction difficult. Our approach to this issue is to separate traffic into predictable and unpredictable parts and to apply a different control policy to each. There are two challenges in achieving this: dynamically separating traffic according to predictability and dynamically controlling routes for each separated part. In this paper, we propose a macroflow-based TE scheme that uses different routing policies in accordance with traffic predictability. We also propose a traffic-separation algorithm based on real-time traffic analysis and a framework for controlling the separated traffic with software-defined networking technology, particularly OpenFlow. An evaluation with actual traffic measured on the Internet2 network shows that, compared with current TE schemes, the proposed scheme can reduce the maximum link load by 34% (at the most congested time) and the average link load by 11% on average.
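A minimal sketch of the separation step, with a hypothetical criterion (the paper's real-time algorithm is more involved): back-test a one-step-ahead predictor on each flow's rate series and call the flow predictable if the relative RMS error is small. Each series is assumed to have at least two samples.

```python
import statistics

def separate_by_predictability(history, predict, err_threshold):
    # history: {flow_id: [rate per interval]}; predict takes a prefix of
    # the series and returns a one-step-ahead forecast.
    predictable, unpredictable = [], []
    for flow, series in history.items():
        preds = [predict(series[:t]) for t in range(1, len(series))]
        sq = [(p - a) ** 2 for p, a in zip(preds, series[1:])]
        rms = (sum(sq) / len(sq)) ** 0.5
        target = (predictable
                  if rms <= err_threshold * statistics.mean(series)
                  else unpredictable)
        target.append(flow)
    return predictable, unpredictable

# Example predictor: naive "last value" forecast.
last_value = lambda s: s[-1]
```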
With the popularity of smart devices, mobile crowdsensing, in which a crowdsensing platform gathers useful data from users of smart devices, e.g., smartphones, has become a prevalent paradigm. Various incentive mechanisms have been extensively adopted for crowdsensing platforms to incentivize smart-device users to offer sensing data. Existing works have concentrated on rewarding smart-device users for their short-term effort to provide data, without considering the users' long-term factors or the quality of the data. Our previous work considered the data quality of smart-device users by incorporating their long-term reputation. However, it considered only a quality maximization problem with a budget constraint at one location. In this paper, multiple locations are considered. A Stackelberg game is utilized to solve a two-stage optimization problem. In the first stage, the crowdsensing platform allocates the budget to different locations and sets prices as incentives for users so as to maximize the total data quality. In the second stage, each user chooses the effort to expend in providing data so as to maximize its own utility. Extensive numerical simulations are conducted to evaluate the proposed algorithm.
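A toy two-stage model illustrating the Stackelberg structure (not the paper's formulation): followers respond to a posted price with a closed-form best effort, and the leader grid-searches its budget split across two locations. All cost and price semantics here are hypothetical.

```python
def user_effort(price, cost):
    # Stage 2 (followers): a user with marginal cost `cost` maximizes
    # price*e - cost*e**2, giving best-response effort e* = price/(2*cost).
    return price / (2 * cost)

def total_quality(prices, costs_per_location):
    # Stage 1 (leader): in this toy model, a location's data quality is
    # the sum of its users' best-response efforts.
    return sum(user_effort(p, c)
               for p, costs in zip(prices, costs_per_location)
               for c in costs)

def best_budget_split(budget, costs_a, costs_b, steps=100):
    # Grid-search the leader's split of the budget between two locations,
    # treating the per-location price as the allocated budget.
    best_i = max(range(steps + 1), key=lambda i: total_quality(
        [budget * i / steps, budget * (1 - i / steps)],
        [costs_a, costs_b]))
    return budget * best_i / steps
```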
Network topology significantly affects network cost, path length, link-load distribution, and reliability, so we need to consider multiple criteria with different units simultaneously when designing a network topology. The analytic hierarchy process (AHP) is a technique for balancing multiple criteria in order to reach a rational decision. Using AHP, we can reflect the relative importance of each criterion in the evaluation result; therefore, we applied it to network topology evaluation in past research. When evaluating network topologies using AHP, we need to construct the set of topology candidates prior to the evaluation. However, the time required to construct this set increases greatly as the network size grows. In this paper, we propose applying a binary partition approach to construct the topology candidate set with dramatically reduced calculation time. To reduce the calculation time, we introduce an upper limit on the total link length. Although introducing this upper limit affects the AHP results, we show that desirable topologies are still selected.
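For reference, the AHP weighting step itself is standard; below is a compact sketch of computing criterion weights from a pairwise-comparison matrix via power iteration and ranking topology candidates by weighted score (per-criterion candidate scores are assumed already normalized).

```python
def ahp_weights(pairwise, iters=50):
    # Priority weights of an AHP pairwise-comparison matrix, approximated
    # by power iteration toward the principal eigenvector.
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

def rank_candidates(scores, weights):
    # Overall AHP score of each topology candidate = weighted sum of its
    # normalized per-criterion scores; best candidate first.
    return sorted(scores, key=lambda t: -sum(
        w * s for w, s in zip(weights, scores[t])))

# Example judgments: cost 3x as important as path length, 5x as reliability.
w = ahp_weights([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
```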
Ryoichi KAWAHARA Tatsuya MORI Keisuke ISHIBASHI Noriaki KAMIYAMA Hideaki YOSHINO
Managing performance at the flow level through traffic measurement is crucial for effective network management. With the rapid rise in link speeds, collecting all packets has become difficult, so packet sampling has been attracting attention as a scalable means of measuring flow statistics. In this paper, we first propose a method of estimating the rates of TCP flows sampled through packet sampling and then develop a method of detecting performance degradation at the TCP flow level from the estimated rates. The estimation method uses the sequence numbers of sampled packets, which markedly improves the accuracy of estimating the rates of sampled flows. Using both an analytical model and measurement data, we show that this method gives accurate estimates. We also show that, by observing the estimated rates of sampled flows, we can detect TCP performance degradation. The detection method is based on two findings: (i) sampled flows tend to have high rates, and (ii) when a link becomes congested, the performance of high-rate flows is degraded first. These characteristics indicate that sampled flows are sensitive to congestion, so we can detect performance degradation of congestion-sensitive flows by observing the rates of sampled flows. We also demonstrate the effectiveness of our method using measurement data.
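The key observation behind the estimation method is that TCP sequence numbers count every byte sent, including bytes in packets the sampler missed, so the rate can be estimated without knowing the sampling rate. A minimal sketch, ignoring sequence wraparound and retransmission ambiguity:

```python
def estimate_flow_rate(samples):
    # samples: time-ordered (timestamp_s, tcp_seq) pairs observed for one
    # flow under packet sampling. Sequence numbers count all bytes sent,
    # so (last_seq - first_seq) also covers the unsampled packets between
    # the first and last samples.
    if len(samples) < 2:
        return None  # at least two sampled packets are needed
    (t0, s0), (t1, s1) = samples[0], samples[-1]
    if t1 <= t0 or s1 <= s0:
        return None  # degenerate span (or wraparound, ignored here)
    return (s1 - s0) / (t1 - t0)  # estimated rate in bytes/s
```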
Hideki TODE Noriaki KAMIYAMA Chikara OHTA Miki YAMAMOTO Hiromi OKADA
We propose a new transfer mode and switching architecture that supports a loss-free, zero-delay-jitter service class with shorter switching delay than the "stop-and-go" queueing scheme. The scheme combines ATM with a hierarchical STM framing concept.
Noriaki KAMIYAMA Miki YAMAMOTO Hiromasa IKEDA
The message-level performance of error control in data communication on an ATM network is analyzed. Three layers are considered: a cell (the unit of transmission), a block (the unit of error control), and a message (the unit of transmission at the user level). The error control schemes treated in this paper are GBN (Go-Back-N) and FEC+GBN. The cell loss process is modeled as a two-state Markov chain, reflecting the cell loss behavior of ATM networks. Numerical results show that (1) the improvement in message forwarding delay saturates in some environments as the interface rate becomes high, and (2) FEC is efficient when the burstiness of the cell loss process is small, the message length is large, and the interface rate is high.
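The assumed cell loss process is easy to reproduce; below is a minimal simulation of the two-state Markov (Gilbert) model, with the common simplification that every cell sent in the bad state is lost.

```python
import random

def gilbert_cell_losses(n_cells, p_gb, p_bg, seed=0):
    # Two-state Markov (Gilbert) cell-loss process:
    # p_gb = P(good -> bad), p_bg = P(bad -> good).
    # Returns a boolean loss indicator per cell.
    rng = random.Random(seed)
    bad, losses = False, []
    for _ in range(n_cells):
        bad = rng.random() < ((1 - p_bg) if bad else p_gb)
        losses.append(bad)
    return losses

# The long-run loss rate approaches p_gb / (p_gb + p_bg); a small p_bg
# means long loss bursts, the regime where FEC loses its advantage.
```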
All-optical switching is of considerable interest since it enables the construction of large-capacity networks with protocol- and bit-rate-independent transmission. In this paper, we determine the most desirable of three all-optical architectures for a backbone network: the wavelength-routed network, the slotted wavelength-routed network, and the optical burst switching network. After proposing an optical path accommodation algorithm that minimizes the total fiber length, we evaluate the total network cost in order to compare the first two architectures. We then compare the architectures in terms of burst blocking probability in order to clarify the effectiveness of the third.
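As a first-cut intuition for the burst blocking comparison (illustrative only; the paper's evaluation is more detailed), the classical Erlang-B formula gives the blocking probability of bursts offered to a group of wavelength channels:

```python
def erlang_b(offered_load, channels):
    # Erlang-B blocking probability via the numerically stable recurrence
    # B(0) = 1, B(k) = A*B(k-1) / (k + A*B(k-1)), with A = offered load
    # in erlangs and k the number of channels.
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Example: 40 erlangs offered to 48 wavelengths -> low blocking.
print(erlang_b(40, 48))
```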
Noriaki KAMIYAMA Tatsuya MORI Ryoichi KAWAHARA Shigeaki HARADA
We previously proposed a method of identifying superspreaders by flow sampling and a method of filtering legitimate hosts from the identified superspreaders using a white list. However, the problem of how to optimally set three parameters, the measurement period length φ, the identification threshold m* for the flow count m within φ, and the identification probability H* for hosts with m = m*, remained unsolved. These three parameters seriously impact the ability to identify the spread of infection. Our contributions in this work are two-fold: (1) we propose a method of optimally designing these three parameters so that the ratio of the number of active worm-infected hosts to the number of all vulnerable hosts is bounded by a given upper limit during the time T required to develop a patch or an anti-worm vaccine, and (2) the proposed method optimizes the identification accuracy of worm-infected hosts while making maximal use of the limited memory resources of monitors.
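Under flow sampling, a natural model (an assumption of this sketch, not necessarily the paper's exact expression) is that each of a host's m flows is sampled independently with probability f, so identification against threshold m* is a binomial tail; evaluating it at m = m* gives H*.

```python
from math import comb

def identification_prob(m, f, m_star):
    # Probability that a host emitting m flows within the measurement
    # period phi is identified, when each flow is sampled independently
    # with probability f and identification requires at least m_star
    # sampled flows. Setting m = m_star yields H*.
    return sum(comb(m, k) * f**k * (1 - f)**(m - k)
               for k in range(m_star, m + 1))
```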
Noriaki KAMIYAMA Yousuke TAKAHASHI Keisuke ISHIBASHI Kohei SHIOMOTO Tatsuya OTOSHI Yuichi OHSITA Masayuki MURATA
Although the use of software-defined networking (SDN) enables packet routes to be controlled with fine granularity (down to the individual flow level) through traffic engineering (TE), and thereby enables better balancing of link loads, the corresponding increase in the number of states that must be managed at routers and the controller is problematic in large-scale networks. Aggregating flows into macro flows and assigning routes per macro flow should be an effective approach to solving this problem. However, when macro flows are constructed as TE targets, the variation of traffic rates within each macro flow should be minimized to improve route stability. We propose two methods for generating macro flows: one based on a greedy algorithm that minimizes the rate variation, and the other clustering micro flows with similar traffic variation patterns and optimizing the ratio of traffic extracted from each cluster for aggregation into each macro flow. Evaluation using traffic demand matrices for 48 hours of Internet2 traffic demonstrated that the proposed methods can reduce the number of TE targets to about 1/50 to 1/400 without degrading the link-load balancing effect of TE.
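A greedy sketch of the first proposed method under simplifying assumptions (the published algorithm may differ in its objective and ordering): micro flows are assigned, largest first, to whichever macro flow stays most stable, measured by the coefficient of variation of the aggregate rate.

```python
import statistics

def aggregate(series_list):
    # Element-wise sum of equal-length rate time series.
    return [sum(vals) for vals in zip(*series_list)]

def rate_variation(series):
    # Coefficient of variation; low values mean a stable macro flow.
    m = statistics.mean(series)
    return statistics.pstdev(series) / m if m else 0.0

def greedy_macroflows(micro, num_macro):
    # micro: {flow_id: [rate per interval]}. Assign micro flows, largest
    # first, to the macro flow whose aggregate rate variation after the
    # merge is smallest.
    macro = [[] for _ in range(num_macro)]
    for f in sorted(micro, key=lambda g: -statistics.mean(micro[g])):
        best = min(range(num_macro), key=lambda i: rate_variation(
            aggregate([micro[g] for g in macro[i]] + [micro[f]])))
        macro[best].append(f)
    return macro
```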
Tatsuya OTOSHI Yuichi OHSITA Masayuki MURATA Yousuke TAKAHASHI Noriaki KAMIYAMA Keisuke ISHIBASHI Kohei SHIOMOTO Tomoaki HASHIMOTO
In recent years, the time variation of Internet traffic has increased due to the growth of streaming and cloud services, and backbone networks must accommodate such traffic without congestion. Traffic engineering with traffic prediction is one approach to stably accommodating time-varying traffic: routes are calculated from predicted traffic so as to avoid congestion, but the predictions may include errors that themselves cause congestion. We propose prediction-based traffic engineering that is robust against prediction errors. To achieve robust control, our method uses model predictive control (MPC), a process control method based on prediction of system dynamics. Routes are calculated so that future congestion is avoided without sudden route changes. We apply the calculated routes for the next time slot, observe the traffic, and then predict the traffic again and recalculate the routes. Repeating these steps mitigates the impact of prediction errors because the predictions are corrected at every time slot. Through simulations using backbone network traffic traces, we demonstrate that our method can avoid congestion that other methods cannot.
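A minimal sketch of one MPC iteration for a single origin-destination pair split between two paths; `predict`, `congestion_cost`, and the candidate split ratios are hypothetical stand-ins for the paper's full route optimization. Only the first step of the optimized plan is applied before re-planning.

```python
def mpc_te_step(current_split, predict, horizon, candidates,
                congestion_cost, change_weight=1.0):
    # Score each candidate split ratio over the predicted horizon and
    # penalize deviation from the current split, so routes change
    # gradually; apply the winner for one slot, then re-plan.
    forecast = predict(horizon)  # e.g., list of predicted demands
    def score(s):
        return (sum(congestion_cost(s, d) for d in forecast)
                + change_weight * abs(s - current_split))
    return min(candidates, key=score)

# Hypothetical usage (names are stand-ins, shown for shape only):
# split = mpc_te_step(split, lambda h: demands[t:t + h], 3,
#                     [i / 10 for i in range(11)],
#                     lambda s, d: max(s * d / cap1, (1 - s) * d / cap2))
```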
Noriaki KAMIYAMA Ryoichi KAWAHARA Tatsuya MORI Haruhisa HASEGAWA
The number of users of video on demand (VoD) services has increased dramatically. In VoD services, the demand for content items changes greatly from hour to hour. Because service providers are required to maintain a stable service during peak hours, they need to design system resources based on the demand at peak time, so reducing the server load at this time is important. Although multicast delivery, in which multiple users requesting the same content item are served by one delivery session, is effective for suppressing the server load during peak hours, user response times can increase greatly. A peer-to-peer-assisted delivery system, in which users download content items from other users watching the same item, is also effective for reducing server load. However, its performance depends on selfish user behavior, and optimizing the usage of system resources is difficult. Moreover, complex operations, i.e., switching the delivery multicast tree or source peers, are necessary to support video cassette recorder (VCR) operations, e.g., fast-forward, rewind, and pause. In this paper, we propose reducing the server load without increasing user response time by multicasting popular content items to all users independently of actual requests, in addition to providing on-demand unicast delivery. Through a numerical evaluation using actual VoD access log data, we clarify the effectiveness of the proposed method.
Ryoichi KAWAHARA Tatsuya MORI Takeshi YADA Noriaki KAMIYAMA
We investigate the impact of traffic on the performance of large-scale NAT (LSN), which has been attracting attention as a means of better utilizing the limited number of global IPv4 addresses. We focus on the number of active flows because it drives up LSN memory requirements in two ways: more flows must be held in LSN memory, and more global IPv4 addresses must be prepared. Through analysis of traffic measurement data, we found that more than 1% of hosts generated more than 100 TCP flows or 486 UDP flows at the same time and that, on average, there were 1.43-3.99 active TCP flows per host when the inactivity timer used to clear flow state from the flow table was set to 15 s. When the timer is changed from 15 s to 10 min, the number of active flows increases more than tenfold. We also investigate how to reduce the above impact on LSN in terms of saving memory space and accommodating more users per global IPv4 address. We show that, to save memory space, regulating network anomalies can reduce the number of active TCP flows on an LSN by a maximum of 48.3% and by 29.6% on average. We also discuss the applicability of a batch flow-arrival model for estimating the variation in the number of active flows, which must be known to provision an appropriate amount of memory. One way to allow each global IPv4 address to accommodate more users is to utilize destination IP address information when mapping a source IP address from a private address to a global IPv4 address. This can effectively reduce the required number of global IPv4 addresses, by 85.9% for TCP traffic and 91.9% for UDP traffic on average.
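A sketch of the destination-aware mapping idea: by keying the binding on (source, destination) instead of the source alone, one (global IP, port) pair can be reused concurrently for different destinations. The class below and its linear pool scan are illustrative only, not an LSN implementation.

```python
class DestinationAwareNAT:
    # Reuse a (global_ip, port) pair across different destinations: the
    # translated 5-tuples stay distinct because the destinations differ,
    # so far fewer global addresses are needed per user population.
    def __init__(self, global_ips, ports):
        self.table = {}     # (src, dst) -> (global_ip, port)
        self.used_for = {}  # (global_ip, port) -> set of destinations
        self.pool = [(g, p) for g in global_ips for p in ports]

    def map(self, src, dst):
        key = (src, dst)
        if key in self.table:
            return self.table[key]
        for gp in self.pool:  # linear scan; fine for a sketch
            if dst not in self.used_for.setdefault(gp, set()):
                self.used_for[gp].add(dst)
                self.table[key] = gp
                return gp
        raise RuntimeError("global address/port pool exhausted")
```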
Noriaki KAMIYAMA Keisuke ISHIBASHI Yoko HOSHIAI
During a disaster, users will often be unable to communicate with their families and friends using mobile terminals, e.g., smartphones, due to failures of base stations and the backhaul of cellular networks. Even when cellular networks operate without failure, they can become seriously congested due to dramatically increased traffic demand. To solve these problems, device-to-device (D2D) communication, in which mobile terminals communicate directly without cellular networks, has been investigated. Multi-hop D2D communication using multiple mobile terminals as relay nodes will be effective in maintaining connectivity during a disaster. It is preferable to estimate the success probability of multi-hop D2D communication with a simple method that enables optimal parameter control, e.g., of the ratio of mobile terminals using D2D communication and the maximum hop count. Moreover, when evaluating the reachability of multi-hop D2D communication, we need to consider evacuation behavior during a disaster because the success probability depends on the geographical distribution of mobile terminals. Therefore, in this paper, we derive a simple formula for estimating the success probability of multi-hop D2D communication and analyze its reachability using a multi-agent simulation that reproduces the evacuation behavior expected during an earthquake in Shinjuku Ward, Tokyo.
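For intuition only (the paper derives its own formula), here is a toy estimate under a Poisson-point-process assumption: a hop succeeds if at least one D2D-enabled terminal lies within radio range, and hops are treated as independent.

```python
import math

def d2d_success_prob(density, d2d_ratio, radio_range, hops):
    # Toy model, not the paper's derived formula: terminals form a
    # Poisson point process with the given density [terminals/m^2];
    # d2d_ratio of them participate in D2D relaying.
    mean_neighbors = d2d_ratio * density * math.pi * radio_range ** 2
    per_hop = 1.0 - math.exp(-mean_neighbors)  # P(some relay in range)
    return per_hop ** hops                     # all hops must succeed

# Example: 0.01 terminals/m^2, 30% using D2D, 100 m range, 5 hops.
print(d2d_success_prob(0.01, 0.3, 100.0, 5))
```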