
Keyword Search Result

[Keyword] perspective transform (3 hits)

Showing results 1-3 of 3
  • An Improved Look-Up Table-Based FPGA Implementation of Image Warping for CMOS Image Sensors

    Se-yong RO  Lin-bo LUO  Jong-wha CHONG  

     
    PAPER-Image Processing and Video Processing

    Vol: E95-D No:11  Page(s): 2682-2692

    Image warping is usually used to perform real-time geometric transformation of the images captured by the CMOS image sensor of a video camera. Several existing look-up table (LUT)-based algorithms achieve real-time performance; however, the size of the LUT is still large, and it has to be stored in off-chip memory. To reduce the latency and bandwidth cost of off-chip memory, this paper proposes an improved LUT (ILUT) scheme that compresses the LUT to the point where it can be stored in on-chip memory. First, a one-step transformation is adopted instead of several on-line calculation stages. The memory size of the LUT is then reduced by exploiting the similarity of neighboring coordinates, as well as the symmetry of video camera images. Moreover, an elaborate pipelined hardware structure, cooperating with a novel 25-point interpolation algorithm, is proposed to accelerate the system and further reduce memory usage. The proposed system is implemented on a field-programmable gate array (FPGA) platform. Two different examples show that the proposed ILUT achieves real-time performance with small memory usage and low system requirements.
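
    As a rough sketch of the general LUT-based warping idea the abstract builds on (not the authors' ILUT scheme itself), the NumPy fragment below precomputes a per-pixel source-coordinate table for a fixed inverse perspective transform once, then replays it for each frame with bilinear interpolation. The names build_lut and warp_with_lut are hypothetical, and a single-channel image is assumed.

        import numpy as np

        def build_lut(H_inv, height, width):
            # For every output pixel, precompute the real-valued source
            # coordinates under the 3x3 inverse perspective transform H_inv.
            ys, xs = np.mgrid[0:height, 0:width]
            pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
            src = H_inv @ pts
            src = src[:2] / src[2]  # perspective divide
            return src[0].reshape(height, width), src[1].reshape(height, width)

        def warp_with_lut(img, lut_x, lut_y):
            # Replay the precomputed table with bilinear interpolation.
            h, w = img.shape
            x0 = np.clip(np.floor(lut_x).astype(int), 0, w - 2)
            y0 = np.clip(np.floor(lut_y).astype(int), 0, h - 2)
            fx = np.clip(lut_x - x0, 0.0, 1.0)
            fy = np.clip(lut_y - y0, 0.0, 1.0)
            top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
            bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
            return (top * (1 - fy) + bot * fy).astype(img.dtype)

    Storing the table as small deltas between neighboring entries, in the spirit of the abstract's coordinate-similarity argument, is the kind of compression that makes on-chip storage plausible.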

  • Capturing Wide-View Images with Uncalibrated Cameras

    Vincent van de LAAR  Kiyoharu AIZAWA  

     
    PAPER-Image Processing, Image Pattern Recognition

    Vol: E83-D No:4  Page(s): 895-903

    This paper describes a scheme to capture a wide-view image using a setup of uncalibrated cameras whose optical axes point in divergent directions. The direction of view of the resulting image can be chosen freely in any direction between the two optical axes. The scheme uses eight-parameter perspective transformations to warp the images, the parameters of which are obtained with a relative orientation algorithm. The focal length and scale factor of the two images are estimated using Powell's multi-dimensional optimization technique. Experiments on real images demonstrate the accuracy of the scheme.
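
    For reference, the eight-parameter perspective transform the abstract mentions maps (x, y) to ((a1*x + a2*y + a3)/(a7*x + a8*y + 1), (a4*x + a5*y + a6)/(a7*x + a8*y + 1)). The sketch below solves for the eight parameters from four point correspondences via a plain linear system; this is a generic estimation route, not the paper's relative-orientation algorithm, and the function names are hypothetical.

        import numpy as np

        def perspective_params(src_pts, dst_pts):
            # Build the 8x8 linear system implied by four correspondences
            # (x, y) -> (u, v) and solve for a1..a8.
            A, b = [], []
            for (x, y), (u, v) in zip(src_pts, dst_pts):
                A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
                A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
            return np.linalg.solve(np.array(A, float), np.array(b, float))

        def apply_perspective(a, x, y):
            # Map a point (x, y) through the eight-parameter transform.
            w = a[6] * x + a[7] * y + 1.0
            return ((a[0] * x + a[1] * y + a[2]) / w,
                    (a[3] * x + a[4] * y + a[5]) / w)

    In a stitching setup like the paper's, such a transform would warp one camera's image into the coordinate frame of the chosen viewing direction before blending.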

  • Motion-Compensated Prediction Method Based on Perspective Transform for Coding of Moving Images

    Atsushi KOIKE  Satoshi KATSUNO  Yoshinori HATORI  

     
    PAPER

    Vol: E79-B No:10  Page(s): 1443-1451

    Hybrid image coding is one of the most promising methods for efficient coding of moving images. It jointly makes use of motion-compensated prediction and an orthogonal transform such as the DCT, and was adopted as the basic framework of several world standards, including ITU-T H.261 and ISO MPEG [1], [2]. Most work on motion-compensated prediction has been based on block matching. However, when the input moving images include complicated motion such as rotation or enlargement, block matching often causes block distortion in the decoded images, especially in very low bit-rate image coding. Recently, as one way of solving this problem, motion-compensated prediction methods based on an affine or bilinear transform have been developed [3]-[8]. These methods, however, cannot always express the apparent motion in the image plane, which is a projection from 3-D space onto a 2-D plane, since a perspective transform is usually assumed. A motion-compensation method using a perspective transform was discussed in Ref. [6], but since its motion detection is defined as an extension of block matching, it cannot always detect motion parameters as accurately as gradient-based motion detection. In this paper, we propose a new motion-compensated prediction method for the coding of moving images, especially for very low bit-rate coding below 64 kbit/s. The proposed method is based on a perspective transform and the constraint principle for the temporal and spatial gradients of pixel values; in addition to translational motion, complicated motion in the image plane, including rotation and enlargement due to camera zooming, can also be detected theoretically. A computer simulation was performed on moving test images, and the resulting predicted images were compared with those of conventional methods, such as block matching, in terms of SNR and entropy. The SNR and entropy of the proposed method were better than those of the conventional methods. The proposed method was also applied to very low bit-rate image coding at 16 kbit/s and compared with a conventional method, H.261; the resulting SNR and decoded images were better than those of H.261. We conclude that the proposed method is effective as a motion-compensated prediction method.
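
    The "constraint principle for the temporal and spatial gradients" is the brightness-constancy constraint Ix*u + Iy*v + It = 0. As a hedged illustration of gradient-based fitting of perspective-style motion (not the authors' algorithm), the sketch below fits a standard eight-parameter pseudo-perspective flow model to two frames by linear least squares; the function name is hypothetical, and a single linear solve over the whole frame is assumed.

        import numpy as np

        def estimate_perspective_motion(prev, curr):
            # Fit the pseudo-perspective flow model
            #   u = p0 + p1*x + p2*y + p6*x*x + p7*x*y
            #   v = p3 + p4*x + p5*y + p6*x*y + p7*y*y
            # to the constraint Ix*u + Iy*v + It = 0 by least squares.
            prev = prev.astype(float)
            curr = curr.astype(float)
            Iy, Ix = np.gradient(prev)   # spatial gradients
            It = curr - prev             # temporal gradient
            h, w = prev.shape
            y, x = np.mgrid[0:h, 0:w].astype(float)
            one, zero = np.ones_like(x), np.zeros_like(x)
            bu = [one, x, y, zero, zero, zero, x * x, x * y]
            bv = [zero, zero, zero, one, x, y, x * y, y * y]
            A = np.stack([(Ix * u + Iy * v).ravel()
                          for u, v in zip(bu, bv)], axis=1)
            p, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
            return p

    A motion-compensated predictor in this spirit would warp the previous frame with the recovered parameters and encode only the prediction residual.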