
Author Search Result

[Author] Yong-Uk YOON (2 hits)

  • Enhanced Derivation of Model Parameters for Cross-Component Linear Model (CCLM) in VVC

    Yong-Uk YOON, Do-Hyeon PARK, Jae-Gon KIM

    LETTER-Image Processing and Video Processing

    Publicized: 2019/10/30
    Vol: E103-D No:2
    Page(s): 469-471

    Cross-component linear model (CCLM) has recently been adopted as a chroma intra-prediction tool in Versatile Video Coding (VVC), which is being developed as a new video coding standard. CCLM predicts chroma components from luma components through a linear model, based on the assumption of a linear correlation between the two components. The linear model is derived by linear regression from the reconstructed neighboring luma and chroma samples of the current coding block. A simplified linear modeling method recently adopted in the VVC test model (VTM) 3.0 significantly reduces the computational complexity of deriving the model parameters, but at the cost of considerable coding loss. This letter proposes a linear modeling method that compensates for the coding loss of the simplified linear model. In the proposed method, the model parameters, which are derived only roughly in the existing simplified linear model, are refined more accurately using an individual derivation method for each parameter. Experimental results show that, compared to VTM 3.0, the proposed method gives 0.08%, 0.52% and 0.55% Bjøntegaard-Delta (BD)-rate savings for the Y, Cb and Cr components, respectively, in the All-Intra (AI) configuration, with a negligible increase in computational complexity. (A minimal illustrative sketch of the CCLM model derivation is given after this list.)

  • Efficient Methods of Inactive Regions Padding for Segmented Sphere Projection (SSP) of 360 Video

    Yong-Uk YOON, Yong-Jo AHN, Donggyu SIM, Jae-Gon KIM

    LETTER-Image Processing and Video Processing

    Publicized: 2018/08/20
    Vol: E101-D No:11
    Page(s): 2836-2839

    In this letter, methods of padding the inactive regions of Segmented Sphere Projection (SSP) of 360 video are proposed. A 360 video is projected onto a 2D plane to be coded in one of several projection formats. Some projection formats, such as SSP, have inactive regions in the converted 2D plane. These inactive regions may cause visual artifacts as well as a decrease in coding efficiency due to the discontinuous boundaries between active and inactive regions. In this letter, to improve coding efficiency and reduce visual artifacts, the inactive regions are padded using two types of adjacent pixels taken from either rectangular-face or circle-face boundaries. By padding the inactive regions with these highly correlated adjacent pixels, the discontinuities between active and inactive regions are reduced. The experimental results show that, in terms of end-to-end Weighted to Spherically uniform PSNR (WS-PSNR), the proposed methods achieve a 0.3% BD-rate reduction over the existing padding method for SSP. In addition, the visual artifacts along the borders between discontinuous faces are noticeably reduced. (A minimal padding sketch is given after this list.)
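
The CCLM prediction summarized in the first letter above has the form pred_C = alpha * rec_L + beta, with alpha and beta derived from the reconstructed neighboring luma/chroma samples. Below is a minimal NumPy sketch of that general idea only, not the authors' refined derivation and not the exact VTM 3.0 procedure: the helper names cclm_params_least_squares, cclm_params_min_max and cclm_predict are hypothetical, and the min/max fit is a simplified stand-in for the low-complexity derivation adopted in VTM 3.0.

    import numpy as np

    def cclm_params_least_squares(luma_nb, chroma_nb):
        # Full linear regression over the reconstructed neighboring samples:
        # fit chroma ~ alpha * luma + beta (the original, unsimplified derivation).
        alpha, beta = np.polyfit(luma_nb, chroma_nb, deg=1)
        return alpha, beta

    def cclm_params_min_max(luma_nb, chroma_nb):
        # Low-complexity stand-in for the simplified derivation: fit a straight
        # line through the neighboring samples with the minimum and maximum luma.
        i_min, i_max = np.argmin(luma_nb), np.argmax(luma_nb)
        denom = float(luma_nb[i_max] - luma_nb[i_min])
        if denom == 0.0:
            return 0.0, float(np.mean(chroma_nb))  # flat luma: predict the mean chroma
        alpha = (chroma_nb[i_max] - chroma_nb[i_min]) / denom
        beta = chroma_nb[i_min] - alpha * luma_nb[i_min]
        return alpha, beta

    def cclm_predict(rec_luma, alpha, beta):
        # Chroma intra prediction from the (down-sampled) reconstructed luma block.
        return alpha * rec_luma + beta

With luma_nb and chroma_nb given as 1-D arrays of the neighboring reconstructed samples, both derivations return (alpha, beta); the letter's contribution, refining each parameter individually with its own derivation, would replace the role played here by cclm_params_min_max.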
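
For the second letter, the sketch below illustrates boundary-based padding of the inactive regions of an SSP face, assuming the face is a 2-D NumPy array and active_mask is a boolean array marking its active (circular) samples. The function pad_inactive_regions is an illustrative stand-in, not either of the two methods proposed in the letter: it simply copies the nearest active sample into each inactive one.

    import numpy as np

    def pad_inactive_regions(face, active_mask):
        # Pad each inactive sample of an SSP face with the value of its nearest
        # active sample, so the padded area stays highly correlated with the
        # adjacent face-boundary pixels and the active/inactive discontinuity
        # is reduced before encoding.
        padded = face.copy()
        ay, ax = np.nonzero(active_mask)                  # active sample coordinates
        for y, x in zip(*np.nonzero(~active_mask)):       # every inactive sample
            k = np.argmin((ay - y) ** 2 + (ax - x) ** 2)  # nearest active sample
            padded[y, x] = face[ay[k], ax[k]]
        return padded

The two methods in the letter select the padding source from either the rectangular-face or the circle-face boundary; the nearest-neighbour copy above is only meant to show the underlying idea of filling inactive samples from highly correlated adjacent pixels.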