
Author Search Result

[Author] Chanik PARK (7 hits)

  • Double Indirect Access: Efficient Peer-to-Peer Object Lookup Protocol in Location-Aware Mobile Ad Hoc Networks

    Daewoong KIM  Chanik PARK  

     
    PAPER

    Vol: E90-B No:4  Page(s): 799-808

    Geographic distributed hash table (DHT) protocols are considered efficient for P2P object sharing in mobile ad hoc networks. These protocols assume that the set of key-value pairs, called indexes, is distributed among nodes according to the following hash mapping rule: a key hashes to a geographic coordinate, and the corresponding index is stored at the node closest to the key's hash value. Therefore, when a node changes its position, some indexes have to be redistributed to other nodes to keep the hash mapping rule consistent. The overhead of index redistribution may be high enough to impact normal lookup operations if each node holds a large number of indexes. In this paper, we propose an efficient lookup protocol, called Double Indirect Access, that dispenses with index redistribution to improve lookup performance. The main idea is to determine the mapping from an index to a node not by the node's position, but by the node's static identifier, which is obtained by hashing its MAC address into a geographic coordinate. However, a key lookup request is still routed based on the key's hash value, so it arrives at a node that does not store the corresponding index. In Double Indirect Access, the node to which a key lookup request is routed is called an indirection server, and it is responsible for relaying the lookup request to the node storing the corresponding index. For the indirection server to find the correct destination node, it maintains a list of nodes whose static identifiers (i.e., geographic coordinates) are close to the indirection server's location. Simulation results show that, when the average number of objects per node is more than 256, our approach reduces the number of packet transmissions by about half compared with the conventional geographic DHT protocol. It is also shown that, even when the average number of objects per node is only about 9-16, the overhead of our approach is comparable to that of the conventional protocol.
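
    The indirection step can be sketched in a few lines of Python. This is a minimal illustration of the lookup path only, using hypothetical names (hash_to_coord, nearby_static_ids) and reducing geographic routing to a closest-node search; it is not the paper's protocol implementation.

```python
import hashlib

def hash_to_coord(data: bytes, area=(1000.0, 1000.0)):
    """Hash arbitrary bytes onto a 2-D coordinate inside the deployment area."""
    h = hashlib.sha1(data).digest()
    x = int.from_bytes(h[:4], "big") / 0xFFFFFFFF * area[0]
    y = int.from_bytes(h[4:8], "big") / 0xFFFFFFFF * area[1]
    return (x, y)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

class Node:
    def __init__(self, mac, position):
        self.mac = mac
        self.position = position                       # changes as the node moves
        self.static_id = hash_to_coord(mac.encode())   # fixed for the node's lifetime
        self.index_store = {}                          # key -> object location
        self.nearby_static_ids = []                    # nodes whose static ids fall near self.position

def lookup(all_nodes, key):
    key_coord = hash_to_coord(key.encode())
    # 1) Geographic routing delivers the request to the node currently closest
    #    to the key's coordinate: the indirection server.
    indirection_server = min(all_nodes, key=lambda n: dist(n.position, key_coord))
    # 2) The indirection server relays the request to the node whose *static
    #    identifier* is closest to the key's coordinate; that node holds the index.
    candidates = indirection_server.nearby_static_ids or all_nodes
    index_holder = min(candidates, key=lambda n: dist(n.static_id, key_coord))
    return index_holder.index_store.get(key)
```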

  • Offline Selective Data Deduplication for Primary Storage Systems

    Sejin PARK  Chanik PARK  

     
    PAPER-Data Engineering, Web Information Systems

    Publicized: 2015/10/26  Vol: E99-D No:2  Page(s): 370-382

    Data deduplication is a technology that eliminates redundant data to save storage space. Most previous studies on data deduplication target backup storage, where the deduplication ratio and throughput are the primary concerns. However, data deduplication on primary storage has recently been receiving attention; in this case, I/O latency should be considered together with the deduplication ratio. Unfortunately, data deduplication causes a serious sequential-read-latency problem. When a file is created, the file system allocates physically contiguous blocks to keep sequential-read latency low. The deduplication process, however, rearranges the block mapping information to eliminate duplicate blocks. Because of this rearrangement, the physical sequentiality of the blocks in a file is broken, and a sequential-read request becomes slower because it behaves like a random read. In this paper, we propose a selective data deduplication scheme for primary storage systems. The selective scheme achieves a high deduplication ratio and low I/O latency by applying different data-chunking methods to files according to their access characteristics. In the proposed system, file accesses are characterized by the recent access time and access frequency of each file. No chunking is applied to update-intensive files, since deduplicating them yields little benefit. For sequential-read-intensive files, we apply big chunking to preserve their sequentiality on the media. For random-read-intensive files, small chunking is used to increase the deduplication ratio. Experimental evaluation shows that the proposed method achieves up to 86% of the ideal deduplication ratio and 97% of the sequential-read performance of a native file system.
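
    A minimal Python sketch of the selection policy described above. The thresholds, statistics fields, and function name are illustrative assumptions; the paper classifies files by recent access time and access frequency, but its concrete rules are not reproduced here.

```python
import time

# Hypothetical thresholds; the concrete values are only for illustration.
RECENT_WINDOW_SEC = 24 * 3600
HOT_WRITE_COUNT = 100

def choose_chunking(stats):
    """Pick a chunking method from per-file access statistics.

    stats: dict with keys last_access, write_count, seq_read_count, rand_read_count.
    Returns "none", "big", or "small".
    """
    recently_written = time.time() - stats["last_access"] < RECENT_WINDOW_SEC
    if recently_written and stats["write_count"] >= HOT_WRITE_COUNT:
        return "none"   # update-intensive: deduplicating it yields little benefit
    if stats["seq_read_count"] >= stats["rand_read_count"]:
        return "big"    # sequential-read-intensive: preserve on-media sequentiality
    return "small"      # random-read-intensive: maximize the deduplication ratio
```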

  • A Self-Adjusting Destage Algorithm with High-Low Water Mark in Cached RAID5

    Young Jin NAM  Chanik PARK  

     
    PAPER-Dependable Systems

    Vol: E86-D No:12  Page(s): 2527-2535

    The High-Low Water Mark destage (HLWM) algorithm is widely used to let a cached RAID5 flush dirty data from its write cache to disks, owing to the simplicity of its operation. It starts and stops the destaging process based on two thresholds that are configured at initialization time with the best available knowledge of the underlying storage's performance capability and the workload pattern, including traffic intensity, access patterns, and so on. However, whenever the workload deviates from the original, the thresholds need to be reconfigured for the changed workload. This paper proposes an efficient destage algorithm that automatically reconfigures its thresholds according to the changed traffic intensity and access patterns, a technique we call adaptive thresholding. The core of adaptive thresholding is to define the two thresholds as the product of the monitored increasing and decreasing rates of the write-cache occupancy level and the time required to fill and empty the write cache. We implement the proposed algorithm on an actual RAID system and verify its auto-reconfiguration capability with synthetic workloads having different levels of traffic intensity and access patterns. Performance evaluations under well-known traced workloads reveal that the proposed algorithm reduces disk I/O traffic by about 12%, with a 6% increase in the overwrite ratio, compared with the HLWM algorithm.
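
    A small Python sketch of how such adaptive thresholds might be recomputed from monitored rates, under the reading that each threshold is a rate multiplied by a reference fill/empty time. All names, initial values, and reference times are assumptions for illustration, not the paper's exact formulas.

```python
class AdaptiveThresholds:
    """Adaptive high/low water marks for a cached-RAID5 write cache (sketch)."""

    def __init__(self, cache_blocks, fill_ref_time, empty_ref_time):
        self.cache_blocks = cache_blocks      # total write-cache capacity in blocks
        self.fill_ref_time = fill_ref_time    # reference time to fill the cache (s)
        self.empty_ref_time = empty_ref_time  # reference time to empty the cache (s)
        self.high = 0.8 * cache_blocks        # initial guesses, as in plain HLWM
        self.low = 0.2 * cache_blocks

    def update(self, fill_rate, drain_rate):
        """Recompute thresholds from monitored rates (blocks/s in and out)."""
        # Start destaging early enough that, at the current fill rate, the cache
        # cannot overflow within the reference fill time.
        self.high = max(0.0, self.cache_blocks - fill_rate * self.fill_ref_time)
        # Stop destaging once the drain rate could empty the remaining dirty
        # blocks within the reference empty time.
        self.low = min(self.high, drain_rate * self.empty_ref_time)

    def should_start_destage(self, occupancy):
        return occupancy >= self.high

    def should_stop_destage(self, occupancy):
        return occupancy <= self.low
```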

  • Virtualizing Graphics Architecture of Android Mobile Platforms in KVM/ARM Environment

    Sejin PARK  Byungsu PARK  Unsung LEE  Chanik PARK  

     
    PAPER-Software System

    Publicized: 2017/04/18  Vol: E100-D No:7  Page(s): 1403-1415

    With the availability of virtualization extensions in mobile processors, e.g. the ARM Cortex-A15, multiple virtual execution domains can be supported efficiently on a mobile platform. Each execution domain requires high-performance graphics services for full-featured user interfaces such as smooth scrolling, background image blurring, and 3D images. However, the graphics service is hard to virtualize because multiple service components (e.g. ION and Fence) are involved. Moreover, the complexity of the Graphics Processing Unit (GPU) device driver makes virtualizing the graphics service even harder. In this paper, we propose a technique for virtualizing the graphics architecture of the Android mobile platform in a KVM/ARM environment. The Android graphics architecture relies on underlying Linux kernel services such as the frame buffer memory allocator ION, the buffer synchronization service Fence, the GPU device driver, and the display synchronization service VSync. These kernel services are exposed as device files in the Linux kernel. Our approach is to para-virtualize these device files based on a split device driver model. A major challenge is translating guest-side information into host-side information, e.g. memory address translation, file descriptor management, and GPU Memory Management Unit (MMU) manipulation. The experimental results show that the proposed graphics virtualization technique achieves 84%-100% of native application performance.
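
    The split device driver model can be sketched as a guest-side frontend that forwards requests and a host-side backend that translates guest handles. The Python below is a toy illustration with hypothetical tables (fd_map, addr_map); the real implementation operates on Linux device files and the GPU MMU.

```python
class Backend:
    """Host-side backend: owns the real driver state for every guest."""

    def __init__(self):
        self.fd_map = {}    # (guest_id, guest_fd) -> host-side handle
        self.addr_map = {}  # guest page frame number -> host page frame number

    def open(self, guest_id, guest_fd, device):
        self.fd_map[(guest_id, guest_fd)] = f"host-handle:{device}"

    def ioctl(self, guest_id, guest_fd, request, guest_pfns):
        handle = self.fd_map[(guest_id, guest_fd)]
        # Translate guest pages to host pages before touching hardware
        # (stand-in for the GPU MMU manipulation mentioned in the abstract).
        host_pfns = [self.addr_map.get(pfn, pfn) for pfn in guest_pfns]
        return handle, request, host_pfns   # a real backend would call the host driver here

class Frontend:
    """Guest-side frontend: looks like an ordinary device file to Android."""

    def __init__(self, guest_id, backend):
        self.guest_id = guest_id
        self.backend = backend
        self.next_fd = 3

    def open(self, device):
        fd = self.next_fd
        self.next_fd += 1
        self.backend.open(self.guest_id, fd, device)   # forwarded over a virtio-like channel
        return fd

    def ioctl(self, fd, request, pfns):
        return self.backend.ioctl(self.guest_id, fd, request, pfns)
```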

  • Solid-State Disk with Double Data Rate DRAM Interface for High-Performance PCs

    Dong KIM  Kwanhu BANG  Seung-Hwan HA  Chanik PARK  Sung Woo CHUNG  Eui-Young CHUNG  

     
    LETTER-Computer Systems

    Vol: E92-D No:4  Page(s): 727-731

    We propose a Solid-State Disk (SSD) with a Double Data Rate (DDR) DRAM interface for high-performance PCs. Traditional SSDs simply inherit the interface protocols of Hard Disk Drives (HDDs), such as Parallel Advanced Technology Attachment (PATA) or Serial ATA (SATA), to maintain compatibility. However, an SSD itself provides much higher performance than an HDD, so the interface also needs to be enhanced. Unlike traditional SSDs, the proposed SSD with a DDR DRAM interface is attached to the North Bridge, which provides two or more DDR DRAM interface ports in high-performance PCs. The novelty of our work lies in the DQS signaling scheme, which allows an arbitrary Column Address Strobe (CAS) latency, unlike the typical DDR DRAM interface scheme. The experimental results show that the proposed SSD outperforms the traditional SSD by up to 8.7 times in read mode and 1.5 times in write mode. For synthetic workloads, the proposed scheme also improves performance over the conventional architecture by a factor of 1.6.
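
    A toy Python model of the DQS idea: instead of assuming a fixed CAS latency, the host latches data only when the device asserts DQS, so the SSD may take an arbitrary, flash-dependent number of cycles. The cycle counts and class names are illustrative assumptions, not figures from the paper.

```python
import random

class DdrSsd:
    """Toy device: data becomes valid after a flash-dependent number of cycles."""

    def read(self, lba):
        latency_cycles = random.randint(5, 40)   # flash array access, ECC, etc.
        return latency_cycles, f"data@{lba}"

def host_read(ssd, lba, timeout_cycles=1000):
    latency, data = ssd.read(lba)
    for cycle in range(timeout_cycles):
        dqs_asserted = cycle >= latency   # the SSD raises DQS once data is valid
        if dqs_asserted:
            return cycle, data            # the host latches data on the strobe
    raise TimeoutError("DQS never asserted")

cycles, data = host_read(DdrSsd(), lba=42)
print(f"read completed after {cycles} cycles: {data}")
```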

  • Effects of Data Scrubbing on Reliability in Storage Systems

    Junkil RYU  Chanik PARK  

     
    PAPER-Computer Systems

    Vol: E92-D No:9  Page(s): 1639-1649

    Silent data corruptions, which are induced by latent sector errors, phantom writes, DMA parity errors, and so on, can be detected by explicitly issuing a read command to the disk controller and comparing the returned data with its checksum. Because some of the data stored in a storage system may not be accessed for a long time, there is a high chance that silent data corruption goes undetected, resulting in data loss. Therefore, periodically checking the entire data in a storage system, known as data scrubbing, is essential to detect such silent data corruptions in time. Errors detected by data scrubbing are recovered from the replicas or the redundant information maintained to protect against permanent data loss. The longer the period between data scrubbings, the higher the probability of permanent data loss. This paper proposes a Markov failure-and-repair model to conservatively analyze the effect of data scrubbing on the reliability of a storage system. Using the proposed model, we show the relationship between the data scrubbing period and the number of data replicas required to manage the reliability of a storage system.
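
    The trade-off described here can be illustrated with a small Monte-Carlo sketch in the spirit of the paper's failure-and-repair model (the paper itself uses an analytical Markov model). All rates, the repair assumption, and the function name below are illustrative.

```python
import random

def p_data_loss(replicas, corruption_rate, scrub_period, mission_time, trials=20000):
    """Estimate the probability that every replica silently corrupts between two scrubs."""
    losses = 0
    for _ in range(trials):
        t = 0.0
        while t < mission_time:
            # Each copy corrupts within this scrub interval if its exponential
            # corruption time is shorter than the interval.
            corrupted = [random.expovariate(corruption_rate) < scrub_period
                         for _ in range(replicas)]
            if all(corrupted):
                losses += 1
                break
            # Scrubbing detects corrupted copies and repairs them from a healthy one.
            t += scrub_period
    return losses / trials

# Longer scrub periods or fewer replicas should raise the loss probability.
print(p_data_loss(replicas=3, corruption_rate=1 / 5000.0, scrub_period=100.0,
                  mission_time=10000.0))
```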

  • Fast Reconstruction for Degraded Reads and Recovery Process in Primary Array Storage Systems

    Baegjae SUNG  Chanik PARK  

     
    PAPER-Data Engineering, Web Information Systems

    Publicized: 2016/11/11  Vol: E100-D No:2  Page(s): 294-303

    RAID has been widely deployed in disk array storage systems to manage both performance and reliability. RAID performs two performance-critical operations during disk failures: degraded reads/writes and the recovery process. Before the recovery process completes, reads and writes are degraded because data must be reconstructed from redundancy. The performance of degraded reads/writes is critical for meeting customer service level agreements (SLAs), and the recovery process considerably affects the reliability of a storage system. Both operations require fast data reconstruction. Among the erasure codes for fast reconstruction, Local Reconstruction Codes (LRC) are known to offer the best (or optimal) trade-off among storage overhead, fault tolerance, and the number of disks involved in reconstruction. LRC was originally designed for fast reconstruction in distributed cloud storage systems, where network traffic is the major bottleneck during reconstruction; thus, LRC focuses on reducing the number of disks involved in data reconstruction, which reduces network traffic. However, we observe that when LRC is applied to primary array storage systems, the major reconstruction bottleneck instead comes from uneven disk utilization: underutilized disks cannot receive further I/O requests because of the bottleneck at overloaded disks. The uneven disk utilization in LRC stems from its dedicated group partitioning policy, adopted to achieve the Maximally Recoverable property. In this paper, we present Distributed Reconstruction Codes (DRC), which support fast reconstruction in primary array storage systems. DRC is designed with a group shuffling policy to solve the problem of uneven disk utilization. Experiments on real-world workloads show that DRC with global parity rotation (DRC-G) improves degraded performance by as much as 72% compared with RAID-6 and by as much as 35% compared with LRC under the same reliability. In addition, our study shows that DRC-G reduces the recovery completion time by as much as 52% compared with LRC.
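
    The load-balancing argument can be illustrated with a toy layout comparison in Python: a dedicated group partitioning always rebuilds a failed disk from the same few disks, while a per-stripe group shuffling spreads reconstruction reads across all surviving disks. The group sizes and rotation rule are assumptions for illustration, not DRC's actual layout.

```python
from collections import Counter

DISKS = 8   # illustrative array size; each stripe has two local groups of four disks

def dedicated_groups(stripe):
    # Groups are fixed disk subsets for every stripe, as in a dedicated partitioning.
    return [list(range(0, 4)), list(range(4, 8))]

def shuffled_groups(stripe):
    # Rotate group membership by the stripe number so every disk pairs with every other.
    disks = [(d + stripe) % DISKS for d in range(DISKS)]
    return [disks[0:4], disks[4:8]]

def reconstruction_load(layout, failed_disk, stripes=1000):
    """Count how many reconstruction reads each surviving disk serves."""
    reads = Counter()
    for s in range(stripes):
        for group in layout(s):
            if failed_disk in group:
                for d in group:
                    if d != failed_disk:
                        reads[d] += 1
    return dict(reads)

print("dedicated:", reconstruction_load(dedicated_groups, failed_disk=0))
print("shuffled: ", reconstruction_load(shuffled_groups, failed_disk=0))
```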