
Author Search Result

[Author] Takeshi CHUJOH (2 hits)

  • Overlapped Filtering for Simulcast Video Coding

    Takeshi CHUJOH

    LETTER

    Publicized: 2017/06/14
    Vol: E100-D No:9
    Page(s): 2037-2038

    In video coding, layered coding benefits many applications because it can encode multiple input sources efficiently and provide scalability functions. However, achieving these functions requires specific codecs. Meanwhile, simulcast, which encodes multiple input sources independently, is versatile, although its coding efficiency is lower. In this paper, we propose post-processing for simulcast video coding that improves picture quality and coding efficiency without using any layered coding. In particular, aiming at spatial scalability, we show that overlapped filtering (OLF) improves the picture quality of the high-resolution layer by using the low-resolution layer.
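    The idea in the abstract, that a decoded low-resolution simulcast layer can be used as side information to clean up the high-resolution layer, can be illustrated with a minimal sketch. The nearest-neighbor 2x upsampling and the fixed blending weight below are illustrative assumptions, not the filter actually proposed in the letter.

    ```python
    import numpy as np

    def overlapped_filter(high_res, low_res, weight=0.25):
        """Illustrative sketch of post-processing for simulcast coding:
        upsample the decoded low-resolution layer onto the high-resolution
        grid and blend it with the decoded high-resolution frame.
        The 2x nearest-neighbor upsampling and the blend weight are
        assumptions for illustration, not the paper's OLF design."""
        h, w = high_res.shape
        # Upsample the low-resolution layer (assumed half-size in each dimension)
        up = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)[:h, :w]
        # Weighted blend of the two independently decoded layers
        return (1.0 - weight) * high_res + weight * up
    ```

    In a real decoder, the weight and filter support would be tuned so that the low-resolution contribution suppresses coding noise without blurring detail.
    
    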

  • Neural Network-Based Post-Processing Filter on V-PCC Attribute Frames

    Keiichiro TAKADA  Yasuaki TOKUMO  Tomohiro IKAI  Takeshi CHUJOH

    LETTER

    Publicized: 2023/07/13
    Vol: E106-D No:10
    Page(s): 1673-1676

    Video-based point cloud compression (V-PCC) utilizes video compression technology to efficiently encode dense point clouds, providing state-of-the-art compression performance with a relatively small computational burden. V-PCC converts 3-dimensional point cloud data into three types of 2-dimensional frames, i.e., occupancy, geometry, and attribute frames, and encodes them via video compression. On the other hand, the quality of these frames may be degraded by video compression. This paper proposes an adaptive neural network-based post-processing filter on attribute frames to alleviate this degradation. Furthermore, a novel training method using occupancy frames is studied. The experimental results show average BD-rate gains of 3.0%, 29.3%, and 22.2% for Y, U, and V, respectively.
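    The training idea mentioned in the abstract, using occupancy frames during training, can be sketched as a masked reconstruction loss: only pixels flagged as occupied (i.e., corresponding to actual projected points, not padding) contribute to the error driving the filter. This formulation is an assumption for illustration; the letter's exact training method may differ.

    ```python
    import numpy as np

    def occupancy_masked_mse(pred, target, occupancy):
        """Illustrative occupancy-masked loss for training a post-processing
        filter on V-PCC attribute frames: padding pixels (occupancy == 0)
        carry no real attribute data, so they are excluded from the mean
        squared error. A hypothetical sketch, not the paper's exact loss."""
        mask = occupancy.astype(bool)
        # Reconstruction error only over occupied (valid) pixels
        diff = (pred - target)[mask]
        return float(np.mean(diff ** 2))
    ```

    Masking out unoccupied regions keeps the network from spending capacity on padding areas that are discarded at reconstruction time.
    
    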