
Keyword Search Result

[Keyword] object-based video coding (2 hits)

Results 1-2 of 2
  • Shape-Direction-Adaptive Lifting-Based Discrete Wavelet Transform for Arbitrarily Shaped Segments in Image Compression

    Sheng-Fuu LIN  Chien-Kun SU  

     
    PAPER-Pattern Recognition

    Vol: E91-D No:10  Page(s): 2467-2476

    In this paper, a new lifting-based shape-direction-adaptive discrete wavelet transform (SDA-DWT), which can be applied to arbitrarily shaped segments, is proposed. The SDA-DWT combines three major techniques: the lifting-based DWT, the adaptive directional technique, and the concept of object-based compression in MPEG-4. With SDA-DWT, the number of transformed coefficients equals the number of pixels in the arbitrarily shaped segment image, and the spatial correlation across subbands is well preserved. SDA-DWT can also locally adapt its filtering directions to the texture orientations, improving energy compaction for images containing non-horizontal or non-vertical edge textures. SDA-DWT can be applied to any wavelet-based application, and the lifting technique provides considerable flexibility for hardware implementation. Experimental results show that, for still object images with rich orientation textures, SDA-DWT outperforms SA-DWT by up to 5.88 dB in PSNR at 2.15 bpp (bits per object pixel) and reduces the bit budget by up to 28.5% for lossless compression. SDA-DWT also outperforms DA-DWT by up to 5.44 dB in PSNR at 3.28 bpp and reduces the bit budget by up to 14.0%.
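
    A central building block the abstract names is the lifting-based DWT: the wavelet filter is factored into simple predict and update steps, and invertibility holds because the inverse merely replays those steps in reverse order. As a rough, generic illustration only, the sketch below implements one level of a plain 1-D 5/3 (LeGall) lifting transform in Python with periodic boundary extension; it is not the shape-direction-adaptive SDA-DWT proposed in the paper, and the function names and boundary handling are assumptions of this sketch.

```python
# One level of the forward/inverse 5/3 (LeGall) wavelet transform written
# in lifting form. Generic sketch only -- the paper's SDA-DWT additionally
# adapts the lifting direction to segment shape and texture orientation,
# which is not reproduced here. Periodic boundary extension is assumed.
import numpy as np

def lifting_53_forward(x):
    """Split x into approximation (s) and detail (d) bands; len(x) must be even."""
    even = x[0::2].astype(float)
    odd = x[1::2].astype(float)
    # Predict step: each odd sample is predicted from its two even neighbours.
    d = odd - 0.5 * (even + np.roll(even, -1))
    # Update step: even samples are adjusted so the coarse band keeps the mean.
    s = even + 0.25 * (d + np.roll(d, 1))
    return s, d

def lifting_53_inverse(s, d):
    """Undo the lifting steps in reverse order to recover the original samples."""
    even = s - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([3., 7., 1., 1., -2., 5., 4., 6.])
s, d = lifting_53_forward(signal)
print(np.allclose(lifting_53_inverse(s, d), signal))  # True: perfect reconstruction
```

    Because perfect reconstruction follows from the lifting structure itself rather than from any particular filter direction, direction-adaptive variants can, in principle, change the prediction neighbours per pixel without breaking invertibility, which is what the shape-direction-adaptive approach exploits.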

  • Feature-Based Error Concealment for Object-Based Video

    Pei-Jun LEE  Homer H. CHEN  Wen-June WANG  Liang-Gee CHEN  

     
    PAPER-Multimedia Systems for Communications

    Vol: E88-B No:6  Page(s): 2616-2626

    In this paper, a new error concealment algorithm for MPEG-4 object-based video is presented. The algorithm consists of a feature matching step that identifies temporally corresponding features between video frames and an affine parameter estimation step that finds the motion of the feature points. In the feature matching step, an efficient cross-radial search (CRS) method is developed to find the best matching points. In the affine parameter estimation step, a non-iterative least-squares estimation algorithm is developed to estimate the affine parameters. An attractive feature of the algorithm is that shape data and texture data are handled by the same method. Unlike previous methods, this unified approach works even when the video object undergoes a drastic movement, such as a sharp turn. Experimental results show that the proposed algorithm outperforms previous approaches by about 0.3-2.8 dB for shape data and 1.6-5.0 dB for texture data.
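
    To make the affine parameter estimation step concrete, the sketch below fits a 6-parameter 2-D affine motion model to matched point pairs with a single (non-iterative) linear least-squares solve in Python. It only illustrates the general least-squares formulation; the cross-radial search matching step, the way the paper applies the recovered motion to shape and texture concealment, and the point data below are assumptions of this sketch, not details taken from the paper.

```python
# Non-iterative least-squares fit of a 2-D affine motion model
#   x' = a*x + b*y + tx,   y' = c*x + d*y + ty
# from matched feature points. Generic sketch; the feature matching
# (cross-radial search) step from the paper is not reproduced here.
import numpy as np

def estimate_affine(src, dst):
    """Return the 2x3 affine matrix [[a, b, tx], [c, d, ty]] mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding points, N >= 3.
    """
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])   # design matrix [x, y, 1], shape (N, 3)
    # One linear solve for both output coordinates: A @ P ~= dst, P is (3, 2).
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T

# Hypothetical matched points between a reference frame and a damaged frame.
src = np.array([[10., 10.], [50., 12.], [30., 40.], [60., 55.]])
true_params = np.array([[0.9, -0.1, 5.0],
                        [0.1, 0.95, -3.0]])
dst = src @ true_params[:, :2].T + true_params[:, 2]
print(np.round(estimate_affine(src, dst), 3))  # recovers true_params
```

    Once the 2x3 matrix is known, lost shape and texture blocks can in principle be concealed by warping the corresponding data from a correctly received frame, which is the sense in which a single motion model can serve both kinds of data.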