
Author Search Result

[Author] Inbum JUNG (4 hits)

1-4 of 4 hits
  • A Scheduling Policy for Blocked Programs in Multiprogrammed Shared-Memory Multiprocessors

    Inbum JUNG  Jongwoong HYUN  Joonwon LEE  

     
    PAPER-Software Systems

    Vol: E83-D No:9  Page(s): 1762-1771

    Shared-memory multiprocessors are frequently used as compute servers with multiple parallel programs executing at the same time. In such environments, an operating system switches the contexts of multiple processes. When the operating system switches contexts, in addition to the cost of saving the context of the process being swapped out and bringing in the context of the new process to be run, the cache performance of the processors can also be affected. The blocked algorithm improves cache performance by increasing the locality of memory references. In a blocked program using this algorithm, performance can be significantly affected by the reuse of a block loaded into the cache memory. If frequent context switching replaces the block before it is completely reused, the cache locality of a blocked program cannot be successfully exploited. To address this problem, we propose a preemption-safe policy that exploits the cache locality of blocked programs in a multiprogrammed system. The proposed policy delays context switching until a block is fully reused within a program, but compensates for the monopolized processor time in the processor scheduling mechanism. Our simulation results show that when blocked programs run on multiprogrammed shared-memory multiprocessors, the proposed policy improves their performance due to a decrease in cache misses. In such situations, it also benefits overall system performance due to enhanced processor utilization.
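
    A minimal C sketch of the kind of preemption decision the abstract describes: defer the context switch while a cache block is still being reused, and record the extra time so the scheduler can compensate for it later. The task structure, field names, and bookkeeping are assumptions for illustration, not the paper's actual scheduler.

```c
/* Hypothetical sketch of a preemption-safe scheduling decision: the
 * scheduler defers a context switch until the currently loaded block
 * has been fully reused, and records the extra time so the process can
 * be charged for it afterwards.  Names and fields are illustrative. */
#include <stdbool.h>

struct task {
    bool in_blocked_region;   /* executing inside a cache-blocked loop */
    long block_work_left;     /* iterations until the block is fully reused */
    long overrun_ticks;       /* extra time granted beyond the quantum */
};

/* Called on each timer tick once the quantum of 'cur' has expired. */
bool should_preempt(struct task *cur, long tick_len)
{
    if (cur->in_blocked_region && cur->block_work_left > 0) {
        /* Delay the switch: the loaded block would otherwise be evicted
         * before it is reused.  Remember the monopolized time so other
         * tasks can be compensated in later scheduling decisions. */
        cur->overrun_ticks += tick_len;
        return false;
    }
    return true;  /* safe to switch: no partially reused block in cache */
}
```

    The accumulated overrun_ticks would then lower the task's priority or shorten its next quantum, which is one plausible way to realize the compensation the abstract mentions.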

  • Content Sniffer Based Load Distribution in a Web Server Cluster

    Jongwoong HYUN  Inbum JUNG  Joonwon LEE  Seungryoul MAENG  

     
    PAPER-Software Systems

    Vol: E86-D No:7  Page(s): 1258-1269

    Recently, layer-4 (L4) switches have been widely used as load-balancing front-end routers for Web server clusters. A typical L4 switch attempts to balance load among the servers by estimating it from load metrics measured at the front-end and/or the servers. However, insufficient load metrics, measurement overhead, and feedback delay often cause misestimation of server load. This may incur significant dynamic load imbalance among the servers, particularly when the variation in requested content is high. In this paper, we propose a new content-sniffer-based load distribution strategy. By sniffing the requests being forwarded to the servers and extracting load metrics from them, an L4 switch using our strategy estimates server load more promptly and accurately without the help of the back-end servers. It can therefore react properly to dynamic load imbalance among the servers under various workloads. Our experimental results demonstrate substantial performance improvements over other load-balancing strategies used in typical L4 switches.
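
    A rough C sketch of the sniffing-based estimation idea described above: the front-end charges each server an estimated cost when a request is dispatched and credits it back when the response is observed, so no feedback from the back-end servers is needed. The cost model, server count, and function names are illustrative assumptions rather than the paper's implementation.

```c
/* Illustrative per-server load estimation at a content-sniffing front end.
 * Each sniffed request is charged to the least-loaded server; the charge
 * is removed when the matching response completes. */
#include <stddef.h>

#define NSERVERS 4

static double est_load[NSERVERS];   /* current load estimate per server */

/* Crude cost model derived from the sniffed request (type and size). */
static double request_cost(int is_dynamic, size_t content_bytes)
{
    return (is_dynamic ? 10.0 : 1.0) + (double)content_bytes / 4096.0;
}

/* Pick the server with the smallest estimated load and charge it. */
int dispatch_request(int is_dynamic, size_t content_bytes)
{
    int best = 0;
    for (int s = 1; s < NSERVERS; s++)
        if (est_load[s] < est_load[best])
            best = s;
    est_load[best] += request_cost(is_dynamic, content_bytes);
    return best;
}

/* Called when the sniffer observes the matching response completing. */
void request_done(int server, int is_dynamic, size_t content_bytes)
{
    est_load[server] -= request_cost(is_dynamic, content_bytes);
    if (est_load[server] < 0.0)
        est_load[server] = 0.0;
}
```

    Because both the charge and the credit are derived from sniffed traffic at the switch, the estimate avoids the feedback delay of polling the servers, which is the property the abstract emphasizes.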

  • Coordinated Ramp Metering for Minimum Waiting Time and Limited Ramp Storage

    Soobin JEON  Inbum JUNG  

     
    PAPER-Intelligent Transport System

    Vol: E99-A No:10  Page(s): 1843-1855

    Ramp metering is the most effective and direct method of controlling vehicles entering a freeway. This study proposes a novel density-based ramp metering method. Existing methods typically rely on flow data with low reliability and consequently suffer from various problems. Furthermore, when ramp metering is driven only by freeway congestion, additional congestion and over-capacity can occur on the ramp. To solve the problems faced by existing methods, the proposed method uses the density and acceleration data of vehicles on the freeway and considers the ramp status. The experimental environment was simulated using PTV Corporation's VISSIM simulator, and the Traffic Information and Condition Analysis System was developed to control it. The experiment covered the period between 2:00 PM and 7:00 PM on October 5, 2014, during severe traffic congestion. The simulation results showed that total travel time was reduced by 10% compared to the existing metering system during peak time, thereby addressing the problem of ramp congestion and over-capacity.
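
    A hedged C sketch of a density-based metering rule of the kind described above: the admitted rate falls as mainline density rises and traffic decelerates, but is relaxed when the ramp queue approaches its storage limit. The linear control law, thresholds, and bounds are invented for illustration and are not the paper's controller.

```c
/* Illustrative density-based ramp metering rate.  All constants are
 * assumptions chosen only to show the structure of such a rule. */
double metering_rate(double density,        /* mainline density (veh/km/lane) */
                     double accel,          /* mean mainline acceleration (m/s^2) */
                     double queue_len,      /* vehicles waiting on the ramp */
                     double ramp_capacity)  /* ramp storage limit (vehicles) */
{
    const double r_min = 240.0, r_max = 1200.0;  /* metering bounds (veh/h) */
    const double crit_density = 28.0;            /* assumed critical density */

    /* Restrict entry as the freeway approaches critical density, and
     * further when mainline traffic is decelerating (accel < 0). */
    double rate = r_max - 30.0 * (density - crit_density) + 120.0 * accel;

    /* Relax metering when the ramp queue nears its storage limit, so the
     * queue does not spill back onto surface streets. */
    double occupancy = queue_len / ramp_capacity;
    if (occupancy > 0.8)
        rate += (occupancy - 0.8) * 5.0 * (r_max - rate);

    if (rate < r_min) rate = r_min;
    if (rate > r_max) rate = r_max;
    return rate;
}
```

    The second term is what distinguishes a ramp-aware policy from one driven by freeway congestion alone: it trades a small amount of mainline throughput for bounded ramp waiting time and storage.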

  • Buddy Coherence: An Adaptive Granularity Handling Scheme for Page-Based DSM

    Sangbum LEE  Inbum JUNG  Joonwon LEE  

     
    PAPER-Computer Systems

    Vol: E81-D No:12  Page(s): 1473-1482

    Page-based DSM systems suffer from false sharing because they use a large page as the coherence unit. The optimal page size is dynamically affected by application characteristics, so a fixed-size page cannot satisfy all applications even if it is as small as a cache line. In this paper, we present a software-only coherence protocol called BCP (Buddy Coherence Protocol) that supports multiple page sizes which vary adaptively according to the behavior of each application at run time. In BCP, the address of a remote access and the address of the most recent local access are compared. If they fall in different halves of a page, BCP treats the access as false sharing and demotes the page into two subpages of equal size. If two contiguous pages belong to the same node, BCP promotes them to a superpage to reduce the number of subsequent coherence activities. We also suggest a mechanism that detects data sharing patterns to optimize the protocol: it detects and records the sharing pattern of each page through a state transition mechanism. By referring to these patterns, BCP demotes pages selectively, increasing the effectiveness of demotion. Self-invalidation of migratorily shared pages is also employed to reduce the number of invalidations. Our simulations show that the optimized BCP outperforms almost all the best cases of write-invalidate protocols using fixed-size pages, improving performance by 42.2% for some applications compared with the fixed-size page case.
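
    A simplified C sketch of the demotion test described in the abstract: on a conflicting remote access, the handler checks whether the remote address and the most recent local access fall in different halves of the page; if so, the access pattern looks like false sharing and the page is split into two buddy subpages. The data structure and function names are assumptions, not BCP's actual code.

```c
/* Illustrative false-sharing test and buddy demotion for a page-based
 * DSM coherence unit.  Structure and names are hypothetical. */
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

struct dsm_page {
    uintptr_t base;        /* starting address of this (sub)page */
    size_t    size;        /* current coherence-unit size */
    uintptr_t last_local;  /* address of the most recent local access */
};

/* True if the remote access and the last local access touch different
 * halves of the page, i.e. the sharing is probably false sharing. */
bool is_false_sharing(const struct dsm_page *pg, uintptr_t remote_addr)
{
    size_t half = pg->size / 2;
    bool remote_upper = (remote_addr    - pg->base) >= half;
    bool local_upper  = (pg->last_local - pg->base) >= half;
    return remote_upper != local_upper;
}

/* Demote a page into two equal-sized buddy subpages; 'pg' keeps the
 * lower half and 'upper_out' describes the new upper half. */
void demote(struct dsm_page *pg, struct dsm_page *upper_out)
{
    size_t half = pg->size / 2;
    upper_out->base = pg->base + half;
    upper_out->size = half;
    upper_out->last_local = upper_out->base;
    pg->size = half;
}
```

    Promotion would be the mirror operation: when two buddy subpages end up owned by the same node, they are merged back into the larger unit so later coherence traffic is handled with a single page.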