
Author Search Result

[Author] Sam H. NOH (5 hits)

  • An Empirical Study of FTL Performance in Conjunction with File System Pursuing Data Integrity

    In Hwan DOH  Myoung Sub SHIM  Eunsam KIM  Jongmoo CHOI  Donghee LEE  Sam H. NOH  

     
    LETTER-Software System
    Vol. E93-D, No. 8, pp. 2302-2305

    Because Flash storage, now the dominant form of portable storage, is detachable, the integrity of the data it holds has become an important issue. This study considers the performance of Flash Translation Layer (FTL) schemes embedded in Flash storage in conjunction with file system behavior that pursues high data integrity. To assure strong data integrity, file systems synchronously write all file data to storage, and these synchronous writes are accompanied by hot write references. In this study, we concentrate on the effect of hot write references on Flash storage, and we consider how absorbing these hot write references with a nonvolatile write cache affects the performance of the FTL schemes inside the device. To this end, we quantify the performance of typical FTL schemes for a realistic digital camera workload containing hot write references through experiments on a real system. The results show that, for a workload with hot write references, FTL performance does not conform to previously reported studies. We also conclude that the impact of the underlying FTL scheme on the performance of Flash storage is dramatically reduced by absorbing the hot write references with a nonvolatile write cache.
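
    A minimal host-level sketch of the idea described above, written in C: hot (repeatedly rewritten) logical blocks are absorbed in a nonvolatile write cache so that far fewer overwrites reach the FTL. The structures and parameters below (nv_cache, write_count, HOT_THRESHOLD) are illustrative assumptions, not the setup used in the paper.

      #include <stdio.h>
      #include <string.h>

      #define NV_SLOTS      64      /* assumed capacity of the NVRAM write cache */
      #define PAGE_SIZE     4096
      #define HOT_THRESHOLD 4       /* assumed: block counts as "hot" after 4 rewrites */
      #define MAX_LBA       1024

      struct nv_slot { int lba; int valid; char data[PAGE_SIZE]; };
      static struct nv_slot nv_cache[NV_SLOTS];
      static int write_count[MAX_LBA];   /* per-LBA rewrite counter */
      static long ftl_writes;            /* writes that actually reach the FTL */

      /* Stand-in for the real FTL write path (page mapping, log block, etc.). */
      static void ftl_write(int lba, const char *data)
      {
          (void)data;
          ftl_writes++;
          printf("FTL write: lba=%d (total FTL writes=%ld)\n", lba, ftl_writes);
      }

      /* Write path: hot blocks are absorbed (overwritten in place) in NVRAM,
       * so repeated rewrites of the same LBA never reach the FTL. */
      static void cached_write(int lba, const char *data)
      {
          int i, free_slot = -1;
          write_count[lba]++;

          for (i = 0; i < NV_SLOTS; i++) {
              if (nv_cache[i].valid && nv_cache[i].lba == lba) {
                  memcpy(nv_cache[i].data, data, PAGE_SIZE);   /* absorb rewrite */
                  return;
              }
              if (!nv_cache[i].valid && free_slot < 0)
                  free_slot = i;
          }
          if (write_count[lba] >= HOT_THRESHOLD && free_slot >= 0) {
              nv_cache[free_slot].lba = lba;
              nv_cache[free_slot].valid = 1;
              memcpy(nv_cache[free_slot].data, data, PAGE_SIZE);
              return;                                          /* cache the hot block */
          }
          ftl_write(lba, data);                                /* cold block or cache full */
      }

      int main(void)
      {
          char page[PAGE_SIZE] = {0};
          for (int i = 0; i < 10; i++)
              cached_write(7, page);    /* hot, metadata-like block: mostly absorbed */
          cached_write(100, page);      /* cold data block: goes straight to the FTL */
          return 0;
      }

    With rewrites absorbed in this way, the choice of FTL scheme matters less, which is the effect the abstract reports.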

  • RPP: Reference Pattern Based Kernel Prefetching Controller

    Hyo J. LEE  In Hwan DOH  Eunsam KIM  Sam H. NOH  

     
    LETTER-System Programs
    Vol. E92-D, No. 12, pp. 2512-2515

    Conventional kernel prefetching schemes have focused on exploiting sequential access patterns because they are easy to detect. However, on random and even on sequential references, these schemes can degrade performance because of inaccurate pattern prediction and overshooting. To address these problems, we propose a novel approach that works alongside existing kernel prefetching schemes, called Reference Pattern based kernel Prefetching (RPP). RPP reduces the negative effects of existing schemes by identifying one more reference pattern, namely looping, in addition to the random and sequential patterns, and by delaying prefetching until a pattern is confirmed to be sequential or looping.
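
    A minimal sketch of the pattern-confirmation idea in C: per-file references are classified as sequential, looping, or random, and prefetching is issued only once the pattern has been confirmed as sequential or looping. The classifier below (rpp_state, CONFIRM_COUNT, the loop-detection rule) is a simplified assumption, not the actual kernel code of RPP.

      #include <stdio.h>

      enum pattern { PAT_UNKNOWN, PAT_SEQUENTIAL, PAT_LOOPING, PAT_RANDOM };

      #define CONFIRM_COUNT 3   /* assumed: hits needed before the pattern is trusted */

      struct rpp_state {
          long first_block;     /* first block seen (loop-start candidate) */
          long last_block;      /* most recently referenced block */
          int  seq_hits;        /* consecutive last_block+1 references */
          int  loop_hits;       /* returns to first_block after a run */
          enum pattern pat;
      };

      /* Classify one reference and decide whether to prefetch the next block. */
      static void rpp_reference(struct rpp_state *s, long block)
      {
          if (block == s->last_block + 1) {
              s->seq_hits++;
          } else if (block == s->first_block && s->last_block > s->first_block) {
              s->loop_hits++;            /* jumped back to the start: looping hint */
              s->seq_hits = 0;
          } else {
              s->seq_hits = 0;           /* neither: treat as random for now */
              s->pat = PAT_RANDOM;
          }
          s->last_block = block;

          if (s->seq_hits >= CONFIRM_COUNT)            s->pat = PAT_SEQUENTIAL;
          if (s->loop_hits >= 1 && s->seq_hits >= 1)   s->pat = PAT_LOOPING;

          /* Prefetch only after the pattern is confirmed; otherwise stay passive. */
          if (s->pat == PAT_SEQUENTIAL || s->pat == PAT_LOOPING)
              printf("prefetch block %ld (pattern=%s)\n", block + 1,
                     s->pat == PAT_SEQUENTIAL ? "sequential" : "looping");
      }

      int main(void)
      {
          struct rpp_state s = { .first_block = 0, .last_block = -1 };
          long trace[] = { 0, 1, 2, 3, 4, 0, 1, 2, 3, 4 };   /* small looping scan */
          for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
              rpp_reference(&s, trace[i]);
          return 0;
      }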

  • NVFAT: A FAT-Compatible File System with NVRAM Write Cache for Its Metadata

    In Hwan DOH  Hyo J. LEE  Young Je MOON  Eunsam KIM  Jongmoo CHOI  Donghee LEE  Sam H. NOH  

     
    PAPER-Software Systems
    Vol. E93-D, No. 5, pp. 1137-1146

    File systems make use of the buffer cache to enhance their performance. Traditionally, part of DRAM, which is volatile memory, is used as the buffer cache. In this paper, we consider the use of Non-Volatile RAM (NVRAM) as a write cache for file system metadata in embedded systems. NVRAM is a state-of-the-art memory that provides both non-volatility and random byte addressability. By employing NVRAM as a write cache for dirty metadata, we retain the same integrity as a file system that always writes its metadata synchronously to storage, while at the same time improving performance to the level of a file system that always writes asynchronously. To obtain quantitative results, we developed an embedded board with NVRAM and modified the VFAT file system provided in Linux 2.6.11 to accommodate the NVRAM write cache. We performed a wide range of experiments on this platform for various synthetic and realistic workloads. The results show that substantial reductions in execution time are possible from the application viewpoint. Another consequence of the write cache is its benefit at the FTL layer, where it leads to improved wear leveling of Flash memory and increased energy savings, which are important measures in embedded systems. From the numbers obtained through our experiments, we show that wear leveling improves considerably, and we quantify the improvements in terms of energy.
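
    A minimal sketch in C of the write path described above, assuming a byte-addressable NVRAM region: dirty metadata is committed to the NVRAM write cache, which is as durable as a synchronous write to storage, and is flushed to storage only when the cache fills. The names and sizes (nvram_meta_cache, META_SLOTS, meta_write) are illustrative, not the actual NVFAT implementation.

      #include <stdio.h>
      #include <string.h>

      #define META_SLOTS 128              /* assumed NVRAM write-cache capacity */

      struct meta_entry { unsigned sector; unsigned char buf[512]; int dirty; };

      /* In a real system this array would live in byte-addressable NVRAM and
       * therefore survive a crash; here it is ordinary memory for illustration. */
      static struct meta_entry nvram_meta_cache[META_SLOTS];
      static int used;

      /* Stand-in for the slow path: an actual synchronous write to Flash storage. */
      static void storage_write(unsigned sector, const void *buf)
      {
          (void)buf;
          printf("storage write: metadata sector %u\n", sector);
      }

      /* Metadata update: as durable as a synchronous write, at NVRAM cost. */
      static void meta_write(unsigned sector, const void *buf)
      {
          for (int i = 0; i < used; i++) {
              if (nvram_meta_cache[i].sector == sector) {      /* absorb rewrite */
                  memcpy(nvram_meta_cache[i].buf, buf, 512);
                  nvram_meta_cache[i].dirty = 1;
                  return;
              }
          }
          if (used == META_SLOTS) {                            /* cache full: flush */
              for (int i = 0; i < used; i++)
                  if (nvram_meta_cache[i].dirty)
                      storage_write(nvram_meta_cache[i].sector, nvram_meta_cache[i].buf);
              used = 0;
          }
          nvram_meta_cache[used].sector = sector;
          memcpy(nvram_meta_cache[used].buf, buf, 512);
          nvram_meta_cache[used].dirty = 1;
          used++;
      }

      int main(void)
      {
          unsigned char fat_sector[512] = {0};
          /* Repeated FAT/directory-entry updates hit NVRAM, not storage. */
          for (int i = 0; i < 5; i++)
              meta_write(1, fat_sector);
          meta_write(2, fat_sector);
          return 0;
      }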

  • Slack Space Recycling: Delaying On-Demand Cleaning in LFS for Performance and Endurance

    Yongseok OH  Jongmoo CHOI  Donghee LEE  Sam H. NOH  

     
    PAPER-Data Engineering, Web Information Systems
    Vol. E96-D, No. 9, pp. 2075-2086

    The Log-structured File System (LFS) transforms random writes into large sequential ones to provide superior write performance on storage devices. However, LFS inherently suffers from the overhead of cleaning segments. Specifically, when file system utilization is high and the system is busy, the write performance of LFS degrades significantly because of the high cleaning cost. Also, on the newer flash memory based SSD storage devices, cleaning reduces SSD lifetime because it incurs extra writes. In this paper, we propose an enhancement to the original LFS that alleviates the performance degradation caused by cleaning when the system is busy. The new scheme, which we call Slack Space Recycling (SSR), allows LFS to delay on-demand cleaning during busy hours so that cleaning can be done when the load is much lighter. Specifically, it writes modified data directly into the invalid areas (slack space) of used segments instead of cleaning on demand, pushing cleaning back for later. SSR also has the added benefit of increasing the lifetime of the now popular SSD storage devices. We implement the new SSR-LFS file system in Linux and perform a large set of experiments. The results show that the SSR scheme significantly improves the performance of LFS for a wide range of storage utilization settings and that the lifetime of SSDs is extended considerably.
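
    A minimal sketch of the SSR decision in C: while the system is busy, a modified block is written into the slack (invalidated) space of a used segment, and on-demand cleaning is performed only when no slack space is available or the system is idle. The segment layout, the busy flag, and the function names are simplifying assumptions rather than the SSR-LFS code.

      #include <stdio.h>
      #include <stdbool.h>

      #define SEGMENTS       8
      #define BLOCKS_PER_SEG 4

      /* Per-block state: 1 = live, 0 = invalidated (slack space). */
      static int seg[SEGMENTS][BLOCKS_PER_SEG];

      /* On-demand cleaning: copy live blocks out and free the whole segment. */
      static void clean_segment(int s)
      {
          printf("cleaning segment %d\n", s);
          for (int b = 0; b < BLOCKS_PER_SEG; b++)
              seg[s][b] = 0;
      }

      /* Look for a used segment that still has invalid blocks to recycle. */
      static bool find_slack(int *out_seg, int *out_blk)
      {
          for (int s = 0; s < SEGMENTS; s++) {
              bool used = false, slack = false;
              int blk = -1;
              for (int b = 0; b < BLOCKS_PER_SEG; b++) {
                  if (seg[s][b]) used = true;
                  else if (blk < 0) { slack = true; blk = b; }
              }
              if (used && slack) { *out_seg = s; *out_blk = blk; return true; }
          }
          return false;
      }

      /* Write one modified block: while busy, prefer recycling slack space
       * so that expensive cleaning can be postponed to an idle period. */
      static void lfs_write_block(bool busy)
      {
          int s, b;
          if (busy && find_slack(&s, &b)) {
              seg[s][b] = 1;
              printf("SSR: wrote into slack space at segment %d, block %d\n", s, b);
              return;
          }
          clean_segment(0);              /* fall back to on-demand cleaning */
          seg[0][0] = 1;
          printf("wrote into freshly cleaned segment 0\n");
      }

      int main(void)
      {
          seg[3][0] = 1;                 /* segment 3: one live block, three invalid */
          lfs_write_block(true);         /* busy: SSR recycles slack in segment 3 */
          lfs_write_block(false);        /* idle: clean and write normally */
          return 0;
      }

    The trade-off is that slack-space writes are no longer purely sequential, which the paper accepts during busy periods in exchange for deferring cleaning cost and reducing write traffic to the SSD.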

  • Improving the Performance of Linux Operating System via Buffer Cache Partitioning and Prefetching

    Heung Seok JEON  Sam H. NOH  

     
    PAPER-Software Systems
    Vol. E86-D, No. 3, pp. 616-622

    Buffer caching is an integral part of the operating system. In this paper, we propose a scheme that integrates buffer cache management and prefetching via cache partitioning. The scheme, which we call SA-W2R, is simple to implement, making it a feasible solution for real systems. In its basic form it uses the LRU policy for buffer replacement; however, its modular design allows any replacement policy to be incorporated into the scheme. For prefetching, it uses the LRU-One Block Lookahead (LRU-OBL) approach, eliminating the extra burden that is generally necessary in other prefetching approaches. Implementation studies based on the GNU/Linux kernel version 2.2.14 show that SA-W2R performs better than the scheme currently used, with a maximum improvement of 23% for the workloads considered.
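
    A minimal sketch in C of one plausible reading of the partitioning idea: the cache is split into a main partition managed by LRU and a small partition for blocks brought in by One Block Lookahead, and a prefetched block is promoted to the main partition only when it is actually referenced. The partition sizes, list handling, and promotion rule are assumptions for illustration, not the SA-W2R kernel modification.

      #include <stdio.h>
      #include <string.h>

      #define MAIN_SLOTS     6   /* assumed size of the main (replacement) partition */
      #define PREFETCH_SLOTS 2   /* assumed size of the prefetch partition */

      /* Each partition is a tiny LRU list: index 0 is the most recently used slot. */
      static long main_part[MAIN_SLOTS];
      static long pre_part[PREFETCH_SLOTS];

      static int lru_find(long *list, int n, long blk)
      {
          for (int i = 0; i < n; i++)
              if (list[i] == blk) return i;
          return -1;
      }

      /* Insert at the MRU position, dropping the LRU entry if the list is full. */
      static void lru_insert(long *list, int n, long blk)
      {
          memmove(&list[1], &list[0], (n - 1) * sizeof list[0]);
          list[0] = blk;
      }

      static void lru_remove(long *list, int n, int pos)
      {
          memmove(&list[pos], &list[pos + 1], (n - 1 - pos) * sizeof list[0]);
          list[n - 1] = 0;
      }

      /* Reference block blk: hit/miss handling plus one-block lookahead. */
      static void reference(long blk)
      {
          int pos = lru_find(main_part, MAIN_SLOTS, blk);
          if (pos >= 0) {
              lru_remove(main_part, MAIN_SLOTS, pos);        /* hit: refresh LRU position */
          } else if ((pos = lru_find(pre_part, PREFETCH_SLOTS, blk)) >= 0) {
              lru_remove(pre_part, PREFETCH_SLOTS, pos);     /* promote on actual use */
              printf("promoted prefetched block %ld\n", blk);
          } else {
              printf("miss: fetch block %ld from disk\n", blk);
          }
          lru_insert(main_part, MAIN_SLOTS, blk);

          /* One Block Lookahead: stage blk+1 in the prefetch partition only. */
          if (lru_find(main_part, MAIN_SLOTS, blk + 1) < 0 &&
              lru_find(pre_part, PREFETCH_SLOTS, blk + 1) < 0)
              lru_insert(pre_part, PREFETCH_SLOTS, blk + 1);
      }

      int main(void)
      {
          for (long b = 1; b <= 5; b++)
              reference(b);          /* sequential scan: blocks 2..5 hit via OBL */
          return 0;
      }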