
Author Search Result

[Author] Chuck YOO (10 hits)

Results 1-10 of 10
  • Momentary Recovery Algorithm: A New Look at the Traditional Problem of TCP

    Jae-Hyun HWANG  See-Hwan YOO  Chuck YOO  

     
    PAPER-Network
    Vol: E92-B No:12, Page(s): 3765-3773

    Traditional TCP has a good congestion control strategy that adapts its sending rate in accordance with network congestion. In addition, the fast recovery algorithm helps TCP achieve better throughput by responding well to temporary network congestion. However, if multiple packet losses occur, entry into the congestion avoidance phase is delayed by the long recovery time. Moreover, during the recovery phase, TCP freezes the congestion window size until all lost packets are recovered, which can make the recovery time much longer and degrade performance. To mitigate this recovery overhead, we propose the Momentary recovery algorithm, which recovers packet losses without an extra recovery phase. Like other TCP variants, our algorithm halves the congestion window size when a packet drop is detected, but it enters the congestion avoidance phase immediately, as if loss recovery were already complete. Rather than performing fast retransmission, the TCP sender transmits lost packets along with normal packets as long as the congestion window permits. In this manner, we eliminate the recovery overhead efficiently and reach steady state momentarily after network congestion. Finally, we provide a simulation-based study of TCP recovery behaviors and confirm that our Momentary recovery algorithm consistently outperforms NewReno, SACK, and FACK.
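
The recovery behavior described above can be sketched in a few lines. This is an illustrative toy model, not the paper's implementation; the class and method names are assumptions. On a loss event the window is halved once, and the next sending round mixes retransmissions with new data while congestion avoidance resumes immediately, with no separate recovery phase that freezes the window.

```python
class MomentarySender:
    """Toy model of the Momentary recovery idea (names are illustrative)."""

    def __init__(self, cwnd=16):
        self.cwnd = cwnd      # congestion window, in packets
        self.lost = []        # lost packets awaiting retransmission
        self.next_seq = 0     # next new sequence number to send

    def on_loss(self, lost_seqs):
        # Halve the window once per loss event, like other TCP variants,
        # then behave as if loss recovery were already complete.
        self.cwnd = max(1, self.cwnd // 2)
        self.lost.extend(lost_seqs)

    def send_round(self):
        # Fill the window with retransmissions first, then new packets;
        # no fast-retransmission phase, no frozen window.
        budget = self.cwnd
        sent = []
        while self.lost and budget > 0:
            sent.append(("rexmit", self.lost.pop(0)))
            budget -= 1
        while budget > 0:
            sent.append(("new", self.next_seq))
            self.next_seq += 1
            budget -= 1
        # Linear growth: congestion avoidance continues immediately.
        self.cwnd += 1
        return sent
```

Starting from a window of 8 packets, a loss event halves the window to 4; the next round then carries the two retransmissions plus two new packets and grows the window to 5.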

  • Deduplication TAR Scheme Using User-Level File System

    Young-Woong KO  Min-Ja KIM  Jeong-Gun LEE  Chuck YOO  

     
    LETTER-Data Engineering, Web Information Systems
    Vol: E97-D No:8, Page(s): 2174-2177

    In this paper, we propose a new user-level file system that supports block relocation by modifying the file allocation table without actually copying data. The key idea of the proposed system is to provide block insertion and deletion functions for file manipulation. This approach is very effective for block-aligned file modification applications such as compression utilities and the TAR archival system. To show the usefulness of the proposed file system, we applied the new functionality to the TAR application, modifying the TAR file to support an efficient sub-file management scheme. Experimental results show that the proposed system can significantly reduce file I/O overhead and improve the I/O performance of a file system.
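
A minimal sketch of the core idea, with assumed names (not the paper's API): if a file is just an ordered chain of block numbers in an allocation table, inserting or deleting a block is a table update, so a block-aligned edit (such as replacing one member inside a TAR archive) never moves the stored data.

```python
class BlockTableFile:
    """A file as an ordered list of block numbers in an allocation table."""

    def __init__(self, blocks=None):
        self.blocks = list(blocks or [])  # block numbers, in file order

    def insert_block(self, index, block_no):
        # Splice a block into the chain; no data is copied or moved.
        self.blocks.insert(index, block_no)

    def delete_block(self, index):
        # Unlink a block from the chain; the freed block number is
        # returned so the allocator could reclaim it.
        return self.blocks.pop(index)
```

Inserting block 99 between blocks 10 and 11, then deleting block 10, touches only the table, which is exactly why block-aligned sub-file updates become cheap.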

  • Shared Page Table: Sharing of Virtual Memory Resources

    Young-Woong KO  Chuck YOO  

     
    PAPER-Software Systems
    Vol: E86-D No:1, Page(s): 45-55

    Traditionally, UNIX has been weak in data sharing. By data sharing, we mean that multiple cooperative processes concurrently access and update the same set of data. As the degree of sharing (the number of cooperative processes) increases, the existing UNIX virtual memory systems run into page table thrashing, which causes a major performance bottleneck. Once page table thrashing occurs, UNIX performs miserably regardless of the hardware platform it is running on. This is a critical problem because UNIX is increasingly used in environments, such as banking, that require intensive data sharing. We consider several alternatives to avoid page table thrashing and propose a solution whose main idea is to share page tables in virtual memory. Extensive experiments have been carried out with real workloads, and the results show that the shared page table solution avoids page table thrashing and improves the performance of UNIX by an order of magnitude.

  • Cache-Aware Virtual Machine Scheduling on Multi-Core Architecture

    Cheol-Ho HONG  Young-Pil KIM  Seehwan YOO  Chi-Young LEE  Chuck YOO  

     
    PAPER-Software System
    Vol: E95-D No:10, Page(s): 2377-2392

    Facing practical limits to increasing processor frequencies, manufacturers have resorted to multi-core designs in their commercial products. In multi-core implementations, the cores in a physical package share the last-level caches to improve inter-core communication. To exploit this facility efficiently, operating systems must employ cache-aware schedulers. Unfortunately, virtualization software, a foundation technology of cloud computing, is not yet cache-aware or does not fully exploit the locality of the last-level caches. In this paper, we propose a cache-aware virtual machine scheduler for multi-core architectures. The proposed scheduler exploits the locality of the last-level caches to improve the performance of concurrent applications running on virtual machines. First, we provide a space-partitioning algorithm that migrates and clusters communicating virtual CPUs (VCPUs) in the same cache domain. Second, we provide a time-partitioning algorithm that co-schedules, or schedules in sequence, the clustered VCPUs. Finally, we present a theoretical analysis proving that our scheduling algorithm supports concurrent applications more efficiently than the default credit scheduler in Xen. We implemented our virtual machine scheduler in a recent Xen hypervisor with para-virtualized Linux-based operating systems, and we show that our approach can improve the performance of concurrent virtual machines by up to 19% compared to the credit scheduler.
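
The space-partitioning step can be illustrated as follows. This is a hedged sketch of the general idea, not the Xen implementation: communicating VCPUs are clustered (here with a simple union-find over communication pairs), and each cluster is mapped to one last-level-cache domain. All function names and the round-robin placement are assumptions for illustration.

```python
def cluster_vcpus(vcpus, comm_pairs):
    """Group VCPUs that communicate with each other (union-find)."""
    parent = {v: v for v in vcpus}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in comm_pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    groups = {}
    for v in vcpus:
        groups.setdefault(find(v), []).append(v)
    return list(groups.values())


def assign_domains(clusters, n_domains):
    """Place each cluster in one cache domain, round-robin over domains,
    so communicating VCPUs share a last-level cache while the load is
    spread across domains."""
    placement = {}
    for i, cluster in enumerate(clusters):
        for v in cluster:
            placement[v] = i % n_domains
    return placement
```

With pairs (a,b) and (c,d) on a two-domain package, a and b land in one cache domain and c and d in the other, so each pair's shared working set stays in one last-level cache.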

  • Predictable Packet Latency in Xen-ARM

    Seehwan YOO  Kuenhwan KWAK  Jaehyun JO  Chuck YOO  

     
    PAPER-Software System
    Vol: E95-D No:11, Page(s): 2613-2623

    In this paper, we address the latency issue in Xen-ARM virtual machines. Despite the advantages of virtualization in mobile systems, the current Xen-ARM is difficult to apply to mobile devices because of its unpredictable I/O latency. This paper analyzes the latency of incoming packet handling in Xen-ARM and presents in detail how virtualization affects that latency. To make the latency predictable, we first modify the Xen-ARM scheduler so that the driver domain can be promptly scheduled by the hypervisor. Second, we introduce additional paravirtualization of the guest OS that minimizes the non-preemptible code path. With our enhancements, 99% of incoming packets are predictably handled within one millisecond at the destination guest OS, a feasible time bound for most soft real-time applications.

  • FAMH: Fast Inter-Subnet Multicast Handoff Method for IEEE 802.11 WLANs

    Sang-Seon BYUN  Chuck YOO  

     
    PAPER-Network
    Vol: E88-B No:8, Page(s): 3365-3374

    When a mobile node that subscribes to one or more multicast groups moves to another subnet, it is essential to provide a network-level multicast handoff mechanism. Previous multicast handoff schemes are based on Mobile IP. However, Mobile IP is known to be inadequate for interactive multimedia applications, such as voice over IP or video conferencing, because of its large handoff delay. In addition, little research has addressed multicast handoff in infrastructure-mode WLAN environments. This paper proposes a fast inter-subnet multicast handoff method for Mobile IP based infrastructure-mode IEEE 802.11 WLANs. We introduce a dedicated Multicast Access Point (MAP) that works with the access points specified in the IEEE 802.11 WLAN standard in order to alleviate disruption in receiving multicast datagrams. Unlike previous research, our scheme does not modify the Mobile IP specification. The MAP detects the completion of link-layer handoff, sends an unsolicited IGMP Membership Report to its local router on behalf of the mobile station, and performs unicast tunneling. We evaluate the proposed method using ns-2 simulation. The simulation results show that, for a 20-member multicast group, the proposed method can reduce the disruption period due to inter-subnet multicast handoff to about 1/12 and the packet loss rate to about 1/4 compared with standard Mobile IP based IEEE 802.11 WLAN.

  • Asynchronous UDP

    Chuck YOO  Hyun-Wook JIN  Soon-Cheol KWON  

     
    PAPER-Network
    Vol: E84-B No:12, Page(s): 3243-3251

    Network bandwidth has rapidly increased and high-speed networks have come into wide use, but overheads in legacy network protocols prevent the bandwidth of networks from being fully utilized. Even UDP, which is far lighter than TCP, has been a bottleneck on high-speed networks due to its overhead. This overhead mainly comes from per-byte operations such as data copy and checksum. Previous works have tried to minimize the per-byte overhead but are not easily applicable because of their constraints. The goal of this paper is to investigate how to fully utilize the bandwidth of high-speed networks. We focus on eliminating data copy because the other major per-byte overhead, the checksum, can be minimized in hardware. This paper introduces a new concept called Asynchronous UDP and shows that it eliminates data copy completely. We implement Asynchronous UDP on Linux with ATM and present the experimental results. The experiments show that Asynchronous UDP is 133% faster over ATM than the existing, highly optimized UDP. Beyond the performance improvement, Asynchronous UDP has further advantages: (1) it does not have the constraints of previous attempts, such as copy-on-write and page alignment; (2) it uses far fewer CPU cycles (as little as 1/3), leaving resources available for more connections and/or other useful computations; (3) it gives applications more flexibility and parallelism, because they do not have to wait for the completion of network I/O but can decide when to check for completion.
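
The asynchronous-completion interface described in advantage (3) can be sketched conceptually: the application hands a buffer to the stack, receives a completion handle, keeps computing, and checks the handle when convenient. This is only a user-level toy, with a thread standing in for the kernel's asynchronous transmit path; the class name and API are assumptions, not the paper's in-kernel Linux/ATM implementation.

```python
import socket
import threading


class AsyncUDP:
    """Toy model of an asynchronous send interface (illustrative only)."""

    def __init__(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_async(self, data, addr):
        # Return immediately with a completion handle; the actual
        # transmission proceeds in the background (here, a thread).
        done = threading.Event()

        def worker():
            self.sock.sendto(data, addr)
            done.set()  # signal completion

        threading.Thread(target=worker, daemon=True).start()
        return done


# Usage: fire the send, do other work, then check completion.
# handle = AsyncUDP().send_async(b"payload", ("127.0.0.1", 9999))
# ... other useful computation ...
# handle.wait()  # or handle.is_set() to poll without blocking
```

The point of the interface is the decoupling: the caller chooses when (and whether) to synchronize with the I/O, rather than blocking inside the send call.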

  • A Remote Execution Model for Mobile Code

    Seung-Hyub JEON  Min-Hui LIM  Chuck YOO  

     
    PAPER-Computer Systems
    Vol: E83-D No:11, Page(s): 1924-1930

    The execution model of mobile code inherits from traditional remote execution models, such as telnet, which require two conditions. First, the proper program must already exist in the remote system. Second, there must be a process in the remote system waiting for requests. Mobile code therefore bears the same conditions in order to be executed in a remote system. But these conditions constrain an important aspect of mobile code: the dynamic extension of system functionality. In this paper we propose a new approach, named Function Message, that enables remote execution without these two conditions. Function Message thus makes it easy and natural for mobile code to extend system functionality dynamically. This paper describes the design of Function Message and its implementation on Linux. We measure the overhead of Function Message and verify its usefulness with experimental results. On an ATM network, Function Message can be about five times faster than the traditional remote execution model based on exec().
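
The contrast with exec()-style remote execution can be sketched as follows: the code itself travels inside the message, so the receiver needs neither a pre-installed program nor a dedicated server process for that function. This is a hedged, user-level illustration in Python; the actual Function Message mechanism is an OS-level design, and the helper names here are assumptions.

```python
import marshal
import types


def make_function_message(func, *args):
    # Package the function's compiled code together with its arguments,
    # so the message is self-contained: nothing needs to pre-exist
    # on the remote side.
    return marshal.dumps(func.__code__), args


def handle_function_message(code_bytes, args):
    # The receiver rebuilds the function from the carried code and runs
    # it directly, instead of exec()-ing a pre-installed binary.
    func = types.FunctionType(marshal.loads(code_bytes), {"__builtins__": __builtins__})
    return func(*args)
```

A message built from a local function can be executed on the "remote" side without that side ever having seen the function's name or source beforehand.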

  • Synchronization-Aware Virtual Machine Scheduling for Parallel Applications in Xen

    Cheol-Ho HONG  Chuck YOO  

     
    LETTER
    Vol: E96-D No:12, Page(s): 2720-2723

    In this paper, we propose a synchronization-aware VM scheduler for parallel applications in Xen. The proposed scheduler prevents threads from waiting a significant amount of time during synchronization. For this purpose, we propose an identification scheme that can identify threads that have been waiting for other threads for a long time. In this scheme, we developed a detection module that can infer the internal status of guest OSs. We also present a scheduling policy that can accelerate the bottlenecks of concurrent VMs. We implemented our VM scheduler in a recent Xen hypervisor with para-virtualized Linux-based operating systems, and we show that our approach can improve the performance of concurrent VMs by up to 43% compared to the credit scheduler.

  • Stride Static Chunking Algorithm for Deduplication System

    Young-Woong KO  Ho-Min JUNG  Wan-Yeon LEE  Min-Ja KIM  Chuck YOO  

     
    LETTER-Computer Systems
    Vol: E96-D No:7, Page(s): 1544-1547

    In this paper, we propose a stride static chunking deduplication algorithm that uses a hybrid approach exploiting the advantages of the static chunking and byte-shift chunking algorithms. The key contribution of our approach is to reduce computation time and enhance deduplication performance. We assume that duplicated data blocks are generally gathered into groups; thus, once we find one duplicated data block using byte-shift, we can find the subsequent data blocks with the static chunking approach. Experimental results show that the stride static chunking algorithm gives significant benefits over the static chunking, byte-shift chunking, and variable-length chunking algorithms, particularly in reducing processing time and storage space.
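
The hybrid idea can be sketched in a few lines. This is an illustrative reading of the abstract, not the paper's code, and the function names, block size, and hash choice are assumptions: the scanner slides byte by byte until one block matches the duplicate index, then switches to fixed-size (static) strides, on the assumption that duplicate blocks appear in runs.

```python
import hashlib

BLOCK = 4  # toy block size; real systems use e.g. 4 KB


def block_hash(b):
    """Fingerprint used to look blocks up in the duplicate index."""
    return hashlib.sha1(b).hexdigest()


def stride_static_chunks(data, index):
    """Return (offset, block) pairs whose hashes appear in `index`.

    Byte-shift while searching for the first duplicate; once found,
    advance in static BLOCK-sized strides through the duplicate run.
    """
    found = []
    i = 0
    while i + BLOCK <= len(data):
        blk = data[i:i + BLOCK]
        if block_hash(blk) in index:
            found.append((i, blk))
            i += BLOCK   # static stride inside a duplicate run
        else:
            i += 1       # byte-shift search for the next match
    return found
```

Against an index containing the blocks `abcd` and `efgh`, the scanner byte-shifts past a 2-byte prefix, finds the first block at offset 2, and picks up the second with a single static stride, skipping the per-byte search inside the run.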