Eiji OKI Ryoma KANEKO Nattapong KITSUWAN Takashi KURIMOTO Shigeo URUSHIDANI
Cost-effective cloud storage services are attracting users with their convenience, but there is a trade-off between service availability and usage cost. We develop two cloud provider selection models for cloud storage services that minimize the total cost of usage. The models select multiple cloud providers to meet the user requirements while accounting for provider unavailability. The first model, called the user-copy (UC) model, allows the selection of multiple cloud providers, where the user copies its data to each selected provider. In addition to the user copy function of the UC model, the second model, called the user and cloud-provider copy (UCC) model, allows cloud providers to copy the data and deliver it to other cloud providers. The cloud service is available if at least one selected cloud provider is available. We formulate both models as integer linear programming (ILP) problems. Our performance evaluation shows that both models reduce the total cost of usage compared with the single-cloud-provider selection approach. As the cost of bandwidth usage between a user and a cloud provider increases, the UCC model becomes more beneficial than the UC model. We implement a prototype of the cloud storage service and demonstrate our models over Science Information Network 5.
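The UC-style provider selection can be illustrated with a small integer program. The sketch below, written with the PuLP modeling library, chooses a set of providers that minimizes total usage cost while keeping the joint unavailability of the selected set below a target. The provider names, costs, unavailability values, and the single per-provider cost term are illustrative assumptions; the paper's full ILP, which also accounts for bandwidth-usage costs, is richer than this.

```python
# Minimal sketch of a UC-style provider selection ILP (hypothetical data).
import math
import pulp

providers = {                       # hypothetical per-period cost and unavailability
    "A": {"cost": 10.0, "unavail": 0.02},
    "B": {"cost": 7.0,  "unavail": 0.04},
    "C": {"cost": 5.0,  "unavail": 0.10},
}
target_unavail = 0.001              # service may be unavailable at most 0.1% of the time

prob = pulp.LpProblem("uc_provider_selection", pulp.LpMinimize)
x = {p: pulp.LpVariable(f"x_{p}", cat="Binary") for p in providers}

# Objective: total usage cost of the selected providers.
prob += pulp.lpSum(providers[p]["cost"] * x[p] for p in providers)

# Availability requirement: the product of the selected providers' unavailabilities
# must not exceed the target. Taking logarithms turns the product into a linear sum
# (log(u_p) < 0, so selecting more providers only helps).
prob += (
    pulp.lpSum(math.log(providers[p]["unavail"]) * x[p] for p in providers)
    <= math.log(target_unavail)
)

# At least one provider must be selected.
prob += pulp.lpSum(x.values()) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({p: int(x[p].value()) for p in providers})
```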
Eiji OKI Nattapong KITSUWAN Roberto ROJAS-CESSA
A three-stage Clos-network switch with input queues is attractive for the practical implementation of a large-capacity packet switch. A scheme that configures the first, second, and third stages in that sequence by performing iterative matchings based on random selections is called the staged random scheduling scheme. Despite the usefulness of such a switch, the literature provides no analytical formula that accurately calculates its throughput. This paper derives a formula for the throughput of the staged random scheduling scheme with one or multiple iterations in an input-queued Clos-network switch under uniform traffic. The formula can be used to verify simulation models of very large switches. The derivation considers the selection process at each stage of the switch. The derived formula is used in numerical evaluations to show the throughput of large switch sizes. The results show that the staged random scheduling scheme with multiple iterations, applied to a Clos-network switch with virtual output queues (VOQs) and no internal expansion, approaches 100% throughput under uniform traffic. Furthermore, the derived formulas are used in a practical application to estimate the number of iterations required to achieve 99% throughput for a given switch size. In addition, the staged random scheduling scheme in an input-queued Clos-network switch is modeled and simulated to compare throughput estimates with those obtained from the derived formulas. The simulation results support the correctness of the derived formulas.
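The building block of the scheme, an iterative matching based on random selections, can be cross-checked by Monte Carlo simulation in the same spirit as the paper's verification. The sketch below simulates only a single matching stage under saturated uniform traffic, not the full three-stage Clos-network model or the paper's derived formula; the switch size, iteration counts, and trial count are arbitrary.

```python
# Monte Carlo sketch of iterative matching based on random selections,
# the primitive applied at each stage of the staged random scheduling scheme.
# Single-stage illustration only, under saturated uniform traffic.
import random

def random_matching_ratio(n, iterations, trials=2000):
    """Estimate the fraction of ports matched after a number of iterations."""
    matched_total = 0
    for _ in range(trials):
        free_inputs = set(range(n))
        free_outputs = set(range(n))
        for _ in range(iterations):
            if not free_inputs or not free_outputs:
                break
            # Each free output grants one free input uniformly at random.
            grants = {}
            for out in free_outputs:
                grants.setdefault(random.choice(tuple(free_inputs)), []).append(out)
            # Each granted input accepts one of its grants uniformly at random.
            for inp, outs in grants.items():
                out = random.choice(outs)
                free_inputs.discard(inp)
                free_outputs.discard(out)
        matched_total += n - len(free_inputs)
    return matched_total / (trials * n)

for it in (1, 2, 4):
    print(it, round(random_matching_ratio(32, it), 3))
```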
Nattapong KITSUWAN Eiji OKI Roberto ROJAS-CESSA
This letter presents a theoretical analysis of the dynamics of Parallel Iterative Matching (PIM) with multiple iterations in an input-buffered packet switch. By carefully categorizing all unmatched patterns into several representative patterns after each iteration, our approach obtains the probabilities of accumulated matched pairs in a recursive manner. Numerical evaluations of the analytical formulas are performed.
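For reference, the classical single-iteration case can be worked out directly; this is the textbook saturation result, not the letter's recursive multi-iteration analysis. Under saturated uniform traffic every output grants one of its N requests uniformly at random, and an input is matched if it receives at least one grant, so the expected fraction of matched ports after one iteration is

\[
T_1 = 1 - \Bigl(1 - \tfrac{1}{N}\Bigr)^{N} \;\to\; 1 - e^{-1} \approx 0.63 \quad (N \to \infty).
\]

The letter's analysis tracks how such counts evolve over subsequent iterations by categorizing the surviving unmatched patterns into representative cases.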
Soudalin KHOUANGVICHIT Nattapong KITSUWAN Eiji OKI
This paper proposes an optimization approach that designs a backup network with the minimum total capacity to protect the primary network against random multiple link failures, where each link fails with a given probability. In the conventional approach, the routing in the primary network is not considered a factor in minimizing the total capacity of the backup network. Treating the primary routing as a decision variable when designing the backup network can reduce the total backup capacity compared with the conventional approach. The optimization problem examined here employs robust optimization to provide probabilistic survivability guarantees for the different link capacities in the primary network. The proposed approach formulates the problem as a mixed integer linear programming (MILP) problem with robust optimization. A heuristic implementation is introduced because the MILP problem cannot be solved in practical time as the network size increases. Numerical results show that the proposed approach achieves lower total backup capacity than the conventional approach.
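The core observation, that choosing the primary routing jointly with the backup capacity can reduce the reserved capacity, can be illustrated with a toy MILP. The sketch below, again using PuLP, is deliberately simplified to single-link failures with fixed backup paths, and all topology, demand, and path data are hypothetical; the paper's formulation additionally covers random multiple failures with probabilistic guarantees via robust optimization.

```python
# Toy MILP: choose primary routing and backup capacity together.
# Simplified to single-link failures and fixed backup paths; all data hypothetical.
import pulp

demands = {"d1": 10, "d2": 6}                       # demand volumes
paths = {                                           # candidate primary paths
    "d1": [["e1", "e2"], ["e3"]],
    "d2": [["e2"], ["e3", "e4"]],
}
primary_links = ["e1", "e2", "e3", "e4"]
backup_path = {"e1": ["b1"], "e2": ["b1", "b2"],    # backup links protecting
               "e3": ["b2"], "e4": ["b3"]}          # each primary link
backup_links = ["b1", "b2", "b3"]

prob = pulp.LpProblem("backup_capacity_design", pulp.LpMinimize)
y = {(k, j): pulp.LpVariable(f"y_{k}_{j}", cat="Binary")
     for k in demands for j in range(len(paths[k]))}
cap = {l: pulp.LpVariable(f"cap_{l}", lowBound=0) for l in backup_links}

# Objective: total reserved backup capacity.
prob += pulp.lpSum(cap.values())

# Each demand selects exactly one primary path.
for k in demands:
    prob += pulp.lpSum(y[k, j] for j in range(len(paths[k]))) == 1

# Primary-link load induced by the selected primary paths.
load = {e: pulp.lpSum(demands[k] * y[k, j]
                      for k in demands
                      for j in range(len(paths[k])) if e in paths[k][j])
        for e in primary_links}

# When primary link e fails, its load is rerouted over backup_path[e], so every
# backup link must cover the worst single-failure load it may carry.
for e in primary_links:
    for l in backup_path[e]:
        prob += cap[l] >= load[e]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total backup capacity:", pulp.value(prob.objective))
print({k: [int(y[k, j].value()) for j in range(len(paths[k]))] for k in demands})
```

With these hypothetical numbers the solver routes d1 over e3 and d2 over e2, reserving 16 units of backup capacity in total, whereas the worst fixed primary routing would require twice that; this gap is the degree of freedom the conventional fixed-routing approach gives up.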
Eiji OKI Nattapong KITSUWAN Shunichi TSUNODA Takashi MIYAMURA Akeo MASUDA Kohei SHIOMOTO
This letter proposes a scalable network emulator architecture to support IP optical network management. The network emulator uses the same router interfaces as the actual IP optical network to communicate with the IP optical traffic engineering (TE) server, and behaves as an actual IP optical network between those interfaces. The network emulator mainly consists of databases and three modules: an interface module, a resource simulator module, and a traffic generator module. To make the network emulator scalable in terms of network size, we employ TCP/IP socket communication between the modules. The proposed network emulator has the benefit that its implementation is not strongly dependent on hardware limitations. We develop a prototype of the network emulator based on the proposed architecture. Our design and experiments show that the proposed architecture is effective.
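The module decoupling over TCP/IP sockets can be sketched as follows. The two module roles shown, the JSON message format, and the port number are illustrative assumptions rather than the emulator's actual interfaces; the point is only that modules exchanging messages over sockets can be distributed across hosts as the emulated network grows.

```python
# Minimal sketch of inter-module messaging over TCP/IP sockets.
# Module roles, message format, and port are illustrative assumptions.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9300   # hypothetical resource-simulator endpoint

def resource_simulator_module():
    """Accept one request from another module and return a dummy reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = json.loads(conn.recv(4096).decode())
            reply = {"status": "ok", "echo": request}    # placeholder behavior
            conn.sendall(json.dumps(reply).encode())

def interface_module():
    """Send a path-setup style request and print the simulator's reply."""
    with socket.create_connection((HOST, PORT)) as cli:
        cli.sendall(json.dumps({"cmd": "setup_path",
                                "src": "router1", "dst": "router2"}).encode())
        print(json.loads(cli.recv(4096).decode()))

server = threading.Thread(target=resource_simulator_module, daemon=True)
server.start()
time.sleep(0.2)          # give the listener a moment to start
interface_module()
server.join(timeout=2)
```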
Ravindra Sandaruwan RANAWEERA Eiji OKI Nattapong KITSUWAN
Apache Hadoop and its ecosystem have become the de facto platform for processing large-scale data, or Big Data, because they hide the complexity of distributed computing, scheduling, and communication while providing fault tolerance. Cloud-based environments are becoming a popular platform for hosting Hadoop clusters due to their low initial cost and virtually limitless capacity. However, cloud-based Hadoop clusters bring their own challenges because of contradictory design principles: Hadoop is designed on the shared-nothing principle, whereas the cloud is based on consolidation and resource sharing. Most of Hadoop's features are designed for on-premises data centers where the cluster topology is known. Hadoop depends on the rack assignment of servers, configured by the cluster administrator, to calculate the distance between servers, and it uses this distance to choose the best remote server from which to fetch non-local data. However, public cloud providers do not share the rack information of virtual servers with their tenants. Without rack information, Hadoop may fetch data from a remote server on the other side of the data center. To overcome this problem, we propose a delay-distribution-based scheme that finds the closest server from which to fetch non-local data in public cloud-based Hadoop clusters. The proposed scheme bases server selection on the delay distributions between server pairs, where each distribution is obtained by periodically measuring the round-trip time between servers. Our experiments show that the proposed scheme outperforms conventional Hadoop by nearly 12% in terms of non-local data fetch time. This reduction in data fetch time will lead to a reduction in job run time, especially in real-world multi-user clusters where non-local data fetching can happen frequently.
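A simplified version of the selection logic might look like the sketch below. RTT is approximated by TCP connect time, the median is used as the summary of each delay distribution, and the probe port and replica addresses are hypothetical; the paper's scheme operates on the full delay distributions measured periodically between server pairs within a running Hadoop cluster.

```python
# Sketch of delay-distribution-based remote server selection (simplified).
import socket
import statistics
import time

def sample_rtt(host, port=9866, timeout=1.0):
    """One RTT sample approximated by the TCP connect time, in seconds."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None             # host unreachable; skip this sample

def collect_delay_distributions(hosts, samples=20, interval=0.5):
    """Periodically probe each candidate host and keep its RTT samples."""
    dist = {h: [] for h in hosts}
    for _ in range(samples):
        for h in hosts:
            rtt = sample_rtt(h)
            if rtt is not None:
                dist[h].append(rtt)
        time.sleep(interval)
    return dist

def closest_server(dist):
    """Pick the host whose delay distribution has the lowest median."""
    summaries = {h: statistics.median(s) for h, s in dist.items() if s}
    return min(summaries, key=summaries.get) if summaries else None

# Hypothetical servers holding replicas of the required data block.
replicas = ["10.0.1.15", "10.0.2.23", "10.0.3.7"]
print(closest_server(collect_delay_distributions(replicas, samples=5)))
```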