Krittin INTHARAWIJITR Katsuyoshi IIDA Hiroyuki KOGA
Attaining extremely low-latency service in 5G cellular networks is an important challenge in the communication research field. A higher QoS in the next-generation network could enable several unprecedented services, such as the Tactile Internet, Augmented Reality, and Virtual Reality. However, these services will all need support from powerful computational resources provided through cloud computing. Unfortunately, the geolocation of cloud data centers could be insufficient to satisfy the latency aimed for in 5G networks: the physical distance between servers and users will sometimes be too great to enable quick reaction within the service time boundary. This problem of long latency over long communication distances can, however, be solved by Mobile Edge Computing (MEC), which places many servers along the edges of networks. MEC can provide shorter communication latency, but total latency consists of both the transmission and the processing times. Always selecting the closest edge server will therefore often lead to longer computing latency, especially when a mass of users gathers around particular edge servers. This research therefore studies the effects of both latencies. Communication latency is represented by hop count, and computation latency is modeled by processor sharing (PS). An optimization model and selection policies are also proposed. Quantitative evaluations using simulations show that selecting a server according to the lowest total latency gives the best performance, and that permitting an over-latency barrier further improves results.
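The selection policy the evaluation favors can be sketched compactly. Below is a minimal illustration (not the paper's actual simulator) of lowest-total-latency selection, where communication latency scales with hop count and computing latency follows processor sharing (PS), i.e., a server's capacity is divided equally among its active tasks; all names and numbers are illustrative assumptions.

```python
def ps_latency(workload, capacity, active_tasks):
    """Computing latency under PS: capacity is shared by concurrent tasks."""
    return workload / (capacity / (active_tasks + 1))  # +1 counts the new task

def total_latency(server, workload, per_hop_delay):
    return (server["hops"] * per_hop_delay
            + ps_latency(workload, server["capacity"], server["tasks"]))

def select_edge(servers, workload, per_hop_delay, latency_budget):
    """Pick the server with the lowest total (communication + computing) latency."""
    best = min(servers, key=lambda s: total_latency(s, workload, per_hop_delay))
    if total_latency(best, workload, per_hop_delay) > latency_budget:
        return None  # None = request blocked
    return best

servers = [
    {"id": "edge-1", "hops": 1, "capacity": 10.0, "tasks": 8},  # close but busy
    {"id": "edge-2", "hops": 3, "capacity": 10.0, "tasks": 1},  # farther but idle
]
print(select_edge(servers, workload=2.0, per_hop_delay=0.5, latency_budget=5.0))
```

Here edge-2 wins despite more hops, which is exactly why always choosing the closest server can backfire.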
Suguru YOSHIMIZU Hiroyuki KOGA Katsushi KOUYAMA Masayoshi SHIMAMURA Kazumi KUMAZOE Masato TSURU
With the emergence of bandwidth-greedy application services, high-speed transport protocols are expected to use large amounts of bandwidth effectively and aggressively in current broadband and multimedia networks. However, when high-speed transport protocols compete with standard TCP flows, they can occupy most of the available bandwidth, leading to disruption of service. To deploy high-speed transport protocols on the Internet, such unfair situations must be remedied. In this paper, we therefore propose a method to improve fairness, called Kyushu-TCP (KTCP), which introduces a non-aggressive period in the congestion avoidance phase to give standard TCP flows more chances to increase their transmission rates. The method improves fairness in terms of throughput by estimating the stably available bandwidth-delay product and adjusting the transmission rate based on this estimate. We show the effectiveness of the proposed method through simulations.
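As a rough illustration of the mechanism named above, the sketch below holds the congestion window at the estimated bandwidth-delay product (BDP) during a non-aggressive period instead of probing further; the estimator, the period schedule, and all constants are assumptions, not KTCP's actual parameters.

```python
def ktcp_window(cwnd, est_bdp, in_nonaggressive_period):
    if in_nonaggressive_period:
        # Hold at the stably available BDP estimate; do not probe for more,
        # leaving spare bandwidth for competing standard TCP flows.
        return min(cwnd, est_bdp)
    return cwnd + 1.0  # standard additive increase per RTT

cwnd = 10.0
for rtt in range(10):
    nonaggressive = (rtt % 4 == 3)  # hypothetical schedule: every 4th RTT
    cwnd = ktcp_window(cwnd, est_bdp=12.0, in_nonaggressive_period=nonaggressive)
    print(rtt, cwnd)
```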
Krittin INTHARAWIJITR Katsuyoshi IIDA Hiroyuki KOGA Katsunori YAMAOKA
Most latency-sensitive mobile applications depend on computational resources provided by a cloud computing service. The problem with relying on cloud computing is that the physical locations of cloud servers are sometimes distant from mobile users, making communication latency long. As a result, the concept of a distributed cloud service, called mobile edge computing (MEC), is being introduced into the 5G network. However, MEC can reduce only the communication latency; the computing latency in MEC must also be considered to satisfy the total latency required by services. In this research, we study the impact of both latencies in the MEC architecture with regard to latency-sensitive services. We also consider a centralized model, in which a controller manages flows between users and mobile edge resources, to analyze MEC in a practical architecture. Simulations show that the control interval and controller latency cause some blocking and errors in the system. However, a permissive system that relaxes latency constraints and chooses the edge server with the lowest total latency can improve system performance considerably.
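A minimal sketch of the admission decision implied above, assuming the controller simply adds its own latency to the estimate and that the permissive policy relaxes the latency bound by a multiplicative factor; all numbers are illustrative.

```python
def admit(comm_latency, compute_latency, controller_latency,
          budget, relax_factor=1.0):
    """Admit a flow if estimated total latency fits the (possibly relaxed) bound."""
    total = comm_latency + compute_latency + controller_latency
    return total <= budget * relax_factor  # relax_factor > 1 => permissive

strict = admit(1.0, 3.5, 0.8, budget=5.0)                        # blocked (5.3 > 5.0)
permissive = admit(1.0, 3.5, 0.8, budget=5.0, relax_factor=1.2)  # admitted
print(strict, permissive)
```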
Mikiya YOSHIDA Yusuke ITO Yurino SATO Hiroyuki KOGA
Information-centric networking (ICN) provides low-latency content delivery with in-network caching, but delivery latency depends on the cache's distance from consumers. To reduce delivery latency, a scheme has been proposed that clusters domains and retains the most popular content in each cluster within a cache distribution range, enabling consumers to retrieve content from neighboring clusters/caches. However, when the distribution of content popularity changes, content caches may no longer be distributed adequately within a cluster, so consumers cannot retrieve some content from nearby caches. We therefore propose a dynamic clustering scheme that adjusts the cache distribution range in accordance with changes in content popularity, and we evaluate the effectiveness of the proposed scheme through simulation.
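The core of the idea can be sketched as re-ranking content by observed requests and re-spreading the current top items over a cluster's nodes; the ranking window and round-robin placement below are illustrative assumptions, not the paper's exact rule.

```python
from collections import Counter

def redistribute(requests, cluster_nodes, range_size):
    """Assign the range_size most popular contents round-robin to nodes."""
    top = [name for name, _ in Counter(requests).most_common(range_size)]
    return {name: cluster_nodes[i % len(cluster_nodes)]
            for i, name in enumerate(top)}

print(redistribute(["a", "a", "b", "c", "b", "a"], ["n1", "n2"], range_size=2))
# When popularity shifts, calling redistribute() again moves the caches accordingly.
```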
Yurino SATO Hiroyuki KOGA Takeshi IKENAGA
Packet losses significantly degrade TCP performance in high-latency environments because TCP needs at least one round-trip time (RTT) to recover lost packets, and this recovery time grows longer as latency increases. TCP also keeps its transmission rate low while lost packets are being recovered, thereby degrading throughput. To prevent this performance degradation, the number of retransmissions must be kept as low as possible. We therefore propose a scheme that applies a technology called “forward error correction” (FEC) to the entire TCP operation in order to improve throughput. Since simply applying FEC might not work effectively, three functions were devised: controlling the redundancy level and transmission rate, suppressing the return of duplicate ACKs, and interleaving redundant packets. The effectiveness of the proposed scheme was demonstrated by simulation evaluations in high-latency environments.
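Of the three functions, the redundancy itself can be illustrated with the simplest block code: one XOR parity packet per block lets the receiver rebuild any single lost segment without waiting an RTT for a retransmission. The sketch below assumes equal-sized segments and a block size of three; the scheme's actual redundancy control and interleaving are more elaborate.

```python
def xor_parity(block):
    """XOR equal-sized packets together into one parity packet."""
    parity = bytearray(len(block[0]))
    for pkt in block:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

block = [b"seg1", b"seg2", b"seg3"]  # equal-sized TCP segments
parity = xor_parity(block)

# Suppose seg2 is lost: XOR the survivors with the parity to recover it.
recovered = xor_parity([block[0], block[2], parity])
assert recovered == b"seg2"
print(recovered)
```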
Peng WANG Hiroyuki KOGA Sho YAMADA Shigeki OBOTE Kenichi KAGOSHIMA Kenji ARAKI
A 2.45-GHz-band small passive radio-frequency identification (RFID) tag consists of a small loop antenna and a chip, and its size is only a few millimeters. Because of the tag's poor impedance-matching characteristics and radiation efficiency, an ordinary reader has difficulty reading it. We propose a new technique for reading the tag that involves installing a square half-wavelength meander-line conductor on the reader as an adapter and placing the adapter in the vicinity of the tag, and we verify the effectiveness of the technique by simulation and experiment. Moreover, the characteristics of simultaneously reading multiple small RFID tags with the proposed technique are revealed through simulation and experimental results.
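For a sense of scale, a free-space half wavelength at 2.45 GHz is about 61 mm of conductor, which the meander line folds into a compact square (substrate effects would shorten it further). The quick check below is only this textbook calculation, not the paper's design procedure.

```python
c = 299_792_458.0  # speed of light, m/s
f = 2.45e9         # reader frequency, Hz
half_wavelength = c / f / 2
print(f"{half_wavelength * 1000:.1f} mm")  # ~61.2 mm of conductor
```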
Hiroyuki KOGA Yoshiaki HORI Yuji OIE
In the future Internet, real-time communication generating traffic such as CBR (Constant Bit Rate) flows will become widespread, yet the current Internet has no ability to provide QoS (Quality of Service) assurance for real-time communication. In QoS networks, CBR traffic will have priority, owing to its stringent QoS requirements, over non-real-time traffic such as TCP connections, which use the bandwidth left unused by CBR connections. There is therefore a possibility that prioritized CBR traffic will degrade TCP throughput in QoS networks. For this reason, the performance of Tahoe TCP has been examined in this context, but other TCP variants such as Reno TCP, NewReno TCP, and TCP with the SACK option, which are now very common, have not yet been investigated thoroughly. In the present research, we clarify how these TCP variants behave in QoS networks by means of simulations and compare their performance. The results show that SACK TCP adapts very well to the changing available bandwidth and is very robust against fluctuation, i.e., burstiness, in the CBR packet arrival process.
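The queueing setting under study can be summarized in a few lines: CBR packets receive strict priority, and TCP is served only from the bandwidth CBR leaves unused. The figures below are illustrative.

```python
def serve(link_capacity, cbr_rate, tcp_demand):
    """Return (cbr_served, tcp_served) under strict CBR priority."""
    cbr_served = min(cbr_rate, link_capacity)
    tcp_served = min(tcp_demand, link_capacity - cbr_served)
    return cbr_served, tcp_served

print(serve(link_capacity=10.0, cbr_rate=4.0, tcp_demand=8.0))  # (4.0, 6.0)
```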
Krittin INTHARAWIJITR Katsuyoshi IIDA Hiroyuki KOGA Katsunori YAMAOKA
The Internet of Things (IoT), with its support for cyber-physical systems (CPS), will provide many latency-sensitive services that require very fast responses from network services. Mobile edge computing (MEC), one of the distributed computing models, is a promising component of a low-latency network architecture. In network architectures with MEC, mobile devices offload heavy computing tasks to edge servers. Many studies have examined low-latency network architectures with MEC, but none of them simultaneously satisfies the following: (1) guaranteeing the latency of computing tasks and (2) implementing a real system. In this paper, we design and implement an MEC-based network architecture that guarantees the latency of offloaded tasks. More specifically, we first estimate the total latency, including both computing and communication latencies, at a centralized node called the orchestrator; if the estimated value exceeds the latency requirement, the task is rejected. We then evaluate the system's performance in terms of task blocking probability, comparing results obtained from experiments with those obtained from simulations. Based on these comparisons, we clarify that the accuracy of computing latency estimation is a significant factor in this system.
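A minimal sketch of the orchestrator's admission rule as described, with a placeholder latency estimator standing in for the real one (the paper's point being that this estimator's accuracy drives the blocking behavior); all constants are assumptions.

```python
import random

def estimate_total_latency(comm_ms, cpu_load):
    # Hypothetical computing-latency model; the real estimator is the
    # accuracy-critical component identified in the paper.
    return comm_ms + 10.0 * (1.0 + cpu_load)

random.seed(0)
blocked = total = 0
for _ in range(10_000):
    total += 1
    est = estimate_total_latency(comm_ms=random.uniform(1, 5),
                                 cpu_load=random.uniform(0, 2))
    if est > 25.0:  # the task's latency requirement: reject if exceeded
        blocked += 1
print(f"blocking probability: {blocked / total:.3f}")
```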
Yusuke ITO Hiroyuki KOGA Katsuyoshi IIDA
Cloud computing, which enables users to enjoy various Internet services provided by data centers (DCs) anytime and anywhere, has attracted much attention. In cloud computing, however, service quality degrades as a user's distance from the DC grows, which is unfair. In this study, we propose a bandwidth allocation scheme based on collectable information to improve fairness and link utilization in DC networks. We confirm the effectiveness of this approach through simulation evaluations.
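The abstract does not spell out the allocation rule, so the sketch below only illustrates the fairness problem it targets: left alone, TCP throughput is roughly proportional to 1/RTT, so users far from the DC get less, whereas the scheme's goal is an equalized outcome. All figures are assumptions.

```python
def tcp_biased_shares(capacity, rtts_ms):
    """Approximate shares without intervention: throughput ~ 1/RTT."""
    weights = [1.0 / r for r in rtts_ms]
    return [capacity * w / sum(weights) for w in weights]

def fair_shares(capacity, rtts_ms):
    """The target: equal shares regardless of distance from the DC."""
    return [capacity / len(rtts_ms)] * len(rtts_ms)

rtts = [10, 40]  # near user vs. far user, ms
print(tcp_biased_shares(100.0, rtts))  # [80.0, 20.0] Mb/s: unfair
print(fair_shares(100.0, rtts))        # [50.0, 50.0] Mb/s
```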
Yurino SATO Yusuke ITO Hiroyuki KOGA
Content-centric networking (CCN) promises efficient content delivery services with in-network caching. However, it cannot utilize cached chunks near users if they are not on the shortest path to the server, and it tends to cache mostly highly popular chunks in a domain. This degrades cache efficiency when obtaining a variety of contents in CCN. We therefore propose hash-based cache distribution and search schemes to obtain various contents from nearby nodes and evaluate the effectiveness of this approach through simulation.
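A minimal sketch of the hash-based idea: deriving the caching node from a hash of the chunk name lets any node both place and later locate a cached copy without flooding, and spreads caches over the domain rather than concentrating them on the shortest path. Node names and the hash choice are illustrative assumptions.

```python
import hashlib

NODES = ["node-0", "node-1", "node-2", "node-3"]

def cache_node(chunk_name):
    """Map a chunk name to its responsible caching node."""
    digest = hashlib.sha1(chunk_name.encode()).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

# Both the caching decision and the later search use the same mapping.
for chunk in ["/video/a/0", "/video/a/1", "/news/b/0"]:
    print(chunk, "->", cache_node(chunk))
```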
Teruaki YOKOYAMA Katsuyoshi IIDA Hiroyuki KOGA Suguru YAMAGUCHI
In this research, we focus on fair bandwidth allocation on the Internet. The Internet provides communication services based on exchanged packets, and the bandwidth available to each customer often fluctuates; fair bandwidth allocation is thus an important issue for ISPs in gaining customer satisfaction. Static bandwidth allocation reserves exclusive bandwidth for specific traffic. Although it gives communications a QoS guarantee, it requires abundant bandwidth resources, a practice known as over-provisioning. In contrast, dynamic control allocates bandwidth resources dynamically and therefore utilizes bandwidth more effectively, but it incurs control overhead in monitoring traffic and estimating the optimum allocation. The Transmission Control Protocol (TCP) is the dominant protocol on the Internet and is itself equipped with a traffic-rate-control mechanism, so an adaptive bandwidth-allocation mechanism must control traffic that is already under TCP control; rapid feedback makes it possible to gain an advantage over TCP's own control. In this paper, we propose an Adaptive Bandwidth Allocation (ABA) mechanism as a feedback system for MPLS. Our proposal regulates traffic adaptively according to its own weight value, which can be assigned by administrators. The feedback bandwidth allocation in previous work needs a round-trip control delay to collect network status along the communication path; we call this "round-trip feedback control." Our proposal, called "one-way feedback control," collects network status in half the round-trip delay. We compare the performance of our one-way feedback-based mechanism and traditional round-trip feedback control in a simulation environment and demonstrate the advantages of our rapid feedback control using experimental results.
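The weight-based allocation at the heart of ABA can be sketched as a proportional share computation; the one-way feedback loop that refreshes the capacity figure in half an RTT is omitted here, and flow names and values are illustrative.

```python
def weighted_allocation(capacity, weights):
    """Allocate bottleneck capacity in proportion to administrator-assigned weights."""
    total = sum(weights.values())
    return {flow: capacity * w / total for flow, w in weights.items()}

print(weighted_allocation(100.0, {"gold": 3, "silver": 2, "bronze": 1}))
# {'gold': 50.0, 'silver': 33.33..., 'bronze': 16.66...}
```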
Masayoshi SHIMAMURA Hiroyuki KOGA Takeshi IKENAGA Masato TSURU
Introducing adaptive online data compression at network-internal nodes is considered as a way of alleviating traffic congestion. In this paper, we assume that advanced relay nodes, which possess both a relay function (network resources) and a processing function (computational and storage resources), are placed inside the network, and we propose an adaptive online lossless packet compression scheme for these nodes. The scheme selectively compresses a packet according to its waiting time in the queue during congestion. Through a preliminary investigation using actual traffic datasets, we examine the compression ratio and processing time of packet-by-packet compression in actual network environments. Then, by means of computer simulations, we show that the proposed scheme reduces packet delay time and discard rate, and we investigate the factors necessary for achieving efficient packet relay.
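A minimal sketch of the selective rule described above, assuming a fixed waiting-time threshold and zlib as the lossless compressor (both are illustrative choices, not the paper's exact parameters): the relay compresses a queued packet only when its waiting time indicates congestion, trading CPU time for transmission time.

```python
import zlib

WAIT_THRESHOLD_MS = 5.0  # hypothetical congestion indicator

def maybe_compress(payload: bytes, waiting_ms: float) -> bytes:
    if waiting_ms > WAIT_THRESHOLD_MS:
        compressed = zlib.compress(payload)
        if len(compressed) < len(payload):  # send only if it actually shrinks
            return compressed
    return payload

pkt = b"AAAA" * 300  # highly compressible payload
print(len(maybe_compress(pkt, waiting_ms=1.0)))  # 1200: queue is short, skip
print(len(maybe_compress(pkt, waiting_ms=9.0)))  # much smaller: compressed
```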
Shigeru KASHIHARA Katsuyoshi IIDA Hiroyuki KOGA Youki KADOBAYASHI Suguru YAMAGUCHI
In future mobile networks, new technologies will be needed to enable a mobile host to move across heterogeneous wireless access networks without disruption of the connection. Many researchers have studied handover in such IP networks, and in almost all cases, special network devices are needed to maintain the host's mobility. Moreover, even though a mobile host with multiple network interfaces can connect to multiple wireless access networks, it cannot move across heterogeneous wireless access networks without degradation of goodput for real-time communication. For these reasons, we consider that a mobile host needs to manage seamless handover on an end-to-end basis. In this paper, we propose a multi-path transmission algorithm for end-to-end seamless handover. The main purpose of this algorithm is to improve goodput during handover by sending the same packets along multiple paths while minimizing unnecessary consumption of network resources. We evaluate our algorithm through simulations and show that a mobile host achieves better goodput.
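A minimal sketch of the duplication idea, assuming a simple handover flag and first-copy-wins reception; interface names and the flag are illustrative, and the algorithm's actual conditions for starting and stopping duplication are more refined.

```python
def transmit(packet_id, interfaces, in_handover):
    """Duplicate across all paths only during handover to limit wasted bandwidth."""
    paths = interfaces if in_handover else interfaces[:1]
    return [(packet_id, ifname) for ifname in paths]

seen = set()

def receive(packet_id):
    """Keep the first copy of a packet; drop duplicates from other paths."""
    if packet_id in seen:
        return False
    seen.add(packet_id)
    return True

print(transmit(1, ["wlan0", "wwan0"], in_handover=True))
print(receive(1), receive(1))  # True, False
```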