
Keyword Search Result

[Keyword] depth map (11 hits)

1-11 of 11 hits
  • Single Image Dehazing Using Invariance Principle

    Mingye JU  Zhenfei GU  Dengyin ZHANG  Jian LIU  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2017/09/01
      Vol:
    E100-D No:12
      Page(s):
    3068-3072

    In this letter, we propose a novel technique for improving the visibility of hazy images. Building on the atmospheric scattering model and the invariance principle for scene structure, we formulate structure constraint equations derived from two simulated inputs, which are produced by applying gamma correction to the input image. Relying on the inherent boundary constraint of the scattering function, the expected scene albedo can then be restored via these constraint equations. Extensive experimental results verify the effectiveness of the proposed dehazing technique.
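
    As a minimal sketch of the model these constraint equations build on (the paper's actual structure-constraint derivation is not reproduced here; the function names and gamma values are illustrative assumptions):

    ```python
    import numpy as np

    # Atmospheric scattering model: I = J*t + A*(1 - t), with scene albedo J,
    # transmission t, and atmospheric light A.
    def simulate_inputs(image, gammas=(0.7, 1.4)):
        """Two simulated inputs obtained by gamma correction (values in [0, 1])."""
        return [np.power(image, g) for g in gammas]

    def recover_albedo(image, t, A, t0=0.1):
        """Invert the scattering model once t and A are known; clipping t is
        the usual boundary-constraint safeguard against division by zero."""
        t = np.clip(t, t0, 1.0)[..., None]
        return np.clip((image - A) / t + A, 0.0, 1.0)
    ```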

  • Depth Map Estimation Using Census Transform for Light Field Cameras

    Takayuki TOMIOKA  Kazu MISHIBA  Yuji OYAMADA  Katsuya KONDO  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2017/08/02
      Vol:
    E100-D No:11
      Page(s):
    2711-2720

    Depth estimation for a lens-array type light field camera is challenging because of sensor noise and radiometric distortion, a global brightness change among sub-aperture images caused by the vignetting effect of the micro-lenses. We propose a depth map estimation method that is robust against both. Our method first binarizes the sub-aperture images by applying the census transform. Next, the binarized images are matched by computing majority operations between corresponding bits and summing up the Hamming distances. The initial depth obtained by this matching is ambiguous because of the extremely short baselines among the sub-aperture images, so we refine it in two steps: we first approximate the initial depth as a set of depth planes, and then optimize the plane-fitting result with an edge-preserving smoothness term. Experiments show that our method outperforms conventional methods.
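
    The census/Hamming step lends itself to a compact sketch (the majority operation across sub-aperture images and the plane-fitting refinement are omitted; function names are ours):

    ```python
    import numpy as np

    def census_transform(img, r=2):
        """Binarize a grayscale image: each pixel becomes the bit vector of
        comparisons with its (2r+1)x(2r+1) neighbourhood (centre excluded).
        The bits are invariant to monotonic brightness changes, which is what
        gives robustness to radiometric distortion."""
        H, W = img.shape
        pad = np.pad(img, r, mode='edge')
        bits = [pad[r+dy:r+dy+H, r+dx:r+dx+W] < img
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)
                if (dy, dx) != (0, 0)]
        return np.stack(bits, axis=-1)          # H x W x ((2r+1)^2 - 1)

    def hamming_cost(c1, c2):
        """Matching cost between two census images: per-pixel Hamming distance."""
        return np.count_nonzero(c1 != c2, axis=-1)
    ```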

  • Pixel-Wise Interframe Prediction based on Dense Three-Dimensional Motion Estimation for Depth Map Coding

    Shota KASAI  Yusuke KAMEDA  Tomokazu ISHIKAWA  Ichiro MATSUDA  Susumu ITOH  

     
    LETTER

      Publicized:
    2017/06/14
      Vol:
    E100-D No:9
      Page(s):
    2039-2043

    We propose a method of interframe prediction in depth map coding that uses pixel-wise 3D motion estimated from already-encoded textures and depth maps. Using the 3D motion, an approximation of the depth map frame to be encoded is generated and used as a reference frame for block-wise motion compensation.
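
    A hedged sketch of how such a prediction could be formed by forward-warping with per-pixel 3D motion (the intrinsics, the motion-array layout, and the hole handling are our assumptions, not the paper's):

    ```python
    import numpy as np

    def predict_depth(depth, motion, f, cx, cy):
        """Forward-warp a depth frame with pixel-wise 3-D motion vectors
        (an (H, W, 3) array) to approximate the next frame; assumes all
        points stay in front of the camera. Unfilled pixels remain inf."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        X = (u - cx) * depth / f                    # back-project to 3-D
        Y = (v - cy) * depth / f
        Xn, Yn, Zn = X + motion[..., 0], Y + motion[..., 1], depth + motion[..., 2]
        un = np.clip(np.round(f * Xn / Zn + cx).astype(int), 0, W - 1)
        vn = np.clip(np.round(f * Yn / Zn + cy).astype(int), 0, H - 1)
        pred = np.full((H, W), np.inf)
        np.minimum.at(pred, (vn, un), Zn)           # keep nearest surface on collisions
        return pred
    ```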

  • Displacement Mapping with an Augmented Patch Mesh

    Sungchul JUNG  Chang Ha LEE  

     
    LETTER-Computer Graphics

      Publicized:
    2014/11/27
      Vol:
    E98-D No:3
      Page(s):
    741-744

    Displacement mapping has been widely used to add geometric surface details to 3D mesh models. However, it requires sufficient tessellation of the mesh if fine details are to be represented. In this paper, we propose a method for applying displacement mapping even to coarse models by using an augmented patch mesh. The patch mesh is a regularly tessellated flat square mesh that is mapped onto the target area. Our method applies displacement mapping to the patch mesh both to fit it to the original mesh and to add surface details. We generate a patch map, which stores three-dimensional displacements from the patch mesh to the original mesh; a displacement map defines the new surface feature. The target area in the original mesh is then replaced with the patch mesh: the patch mesh reconstructs the original shape using the patch map, and the new surface detail is added using the displacement map. Our results show that the method conveniently adds surface features to various models. The proposed method is particularly useful when surface features change dynamically, since the original mesh is preserved and the separate patch mesh overwrites the target area at runtime.
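
    The two maps combine additively on the patch, roughly as follows (a sketch under our own naming; per-vertex normals are assumed to be given):

    ```python
    import numpy as np

    def make_patch_mesh(n):
        """Regularly tessellated flat square patch over [0, 1]^2, z = 0."""
        u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
        return np.stack([u, v, np.zeros_like(u)], axis=-1)   # n x n x 3 vertices

    def displace(patch, patch_map, disp_map, normals):
        """Fit the patch to the original surface (patch_map: a 3-D offset per
        vertex), then add the new detail (disp_map: scalar along the normal)."""
        return patch + patch_map + disp_map[..., None] * normals
    ```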

  • Superpixel Based Depth Map Generation for Stereoscopic Video Conversion

    Jie FENG  Xiangyu LIN  Hanjie MA  Jie HU  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E97-D No:8
      Page(s):
    2131-2137

    In this paper, we propose a superpixel based depth map generation scheme for monoscopic-to-stereoscopic video conversion. The proposed algorithm employs four main processes to generate depth maps for all frames in a video sequence. First, the depth maps of the key frames are generated by superpixel merging and some user interaction. Second, the frames are over-segmented by Simple Linear Iterative Clustering (SLIC) or a depth-aided SLIC method, depending on whether they already have depth maps. Third, each superpixel in the current frame is matched to the corresponding superpixel in the previous frame. Finally, the depth map is propagated with a joint bilateral filter based on the estimated matching vector of each superpixel. Experimental results demonstrate the improved performance of the proposed algorithm.
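
    A minimal sketch of the propagation idea, with nearest-mean-colour matching standing in for the paper's superpixel matching and without the joint bilateral refinement:

    ```python
    import numpy as np
    from skimage.segmentation import slic

    def propagate_depth(prev_rgb, prev_depth, cur_rgb, n_segments=800):
        """Copy per-superpixel depth from the previous frame to the current
        frame by matching superpixels on mean colour (an illustrative stand-in)."""
        prev_lbl = slic(prev_rgb, n_segments=n_segments, compactness=10)
        cur_lbl = slic(cur_rgb, n_segments=n_segments, compactness=10)
        ids = np.unique(prev_lbl)
        prev_feat = np.array([prev_rgb[prev_lbl == i].mean(axis=0) for i in ids])
        prev_d = np.array([np.median(prev_depth[prev_lbl == i]) for i in ids])
        depth = np.zeros(cur_rgb.shape[:2])
        for i in np.unique(cur_lbl):
            feat = cur_rgb[cur_lbl == i].mean(axis=0)
            j = np.argmin(np.linalg.norm(prev_feat - feat, axis=1))
            depth[cur_lbl == i] = prev_d[j]
        return depth
    ```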

  • A Verification-Aware Design Methodology for Thread Pipelining Parallelization

    Guo-An JIAN  Cheng-An CHIEN  Peng-Sheng CHEN  Jiun-In GUO  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E95-D No:10
      Page(s):
    2505-2513

    This paper proposes a verification-aware design methodology that provides developers with a systematic and reliable approach to thread-pipelining parallelization of sequential programs. In contrast to the traditional design flow, a behavior-model program is constructed before parallelization as a bridge that helps developers gradually apply thread-pipelining parallelization. The proposed methodology integrates verification mechanisms into the design flow. To demonstrate its practicality, we applied it to parallelize a 3D depth map generator with thread pipelining. The parallel 3D depth map generator was further integrated into a 3D video playing system to evaluate both the verification overheads of the methodology and the system performance. The results show that the parallel system achieves 33.72 fps at D1 resolution and 12.22 fps at HD720 resolution through a five-stage pipeline. When verifying the parallel program, the proposed verification approach keeps the performance degradation within 23% and 21.1% at D1 and HD720 resolutions, respectively.
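
    The shape of a thread pipeline is easy to sketch; the snippet below is a generic skeleton (the stage functions and queue sizes are placeholders, not the paper's depth-map generator), and the verification idea then amounts to running the sequential behavior-model program on the same frames and comparing outputs:

    ```python
    import threading, queue

    def stage(fn, q_in, q_out):
        """One pipeline stage: pull an item, process it, push it on.
        None is the poison pill that shuts the pipeline down."""
        while (item := q_in.get()) is not None:
            q_out.put(fn(item))
        q_out.put(None)

    def run_pipeline(frames, stage_fns):
        """Chain stages with bounded queues so successive frames overlap in time."""
        qs = [queue.Queue(maxsize=4) for _ in range(len(stage_fns) + 1)]
        threads = [threading.Thread(target=stage, args=(f, qs[i], qs[i + 1]))
                   for i, f in enumerate(stage_fns)]
        for t in threads:
            t.start()
        for frame in frames:
            qs[0].put(frame)
        qs[0].put(None)
        results = []
        while (out := qs[-1].get()) is not None:
            results.append(out)
        for t in threads:
            t.join()
        return results
    ```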

  • Framework of a Contour Based Depth Map Coding Method

    Minghui WANG  Xun HE  Xin JIN  Satoshi GOTO  

     
    PAPER-Coding & Processing

      Vol:
    E95-A No:8
      Page(s):
    1270-1279

    Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system was developed to improve Multiview Video Coding (MVC); in this system, a depth image is introduced to synthesize virtual views on the decoder side. A depth image is piecewise smooth, consisting of sharp contours and smooth interiors, and the contours matter more than the interiors in the view synthesis process. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour based coding strategy is proposed. First, the depth image is divided into layers by depth value intervals. Then regions, defined as the basic coding units in this work, are segmented from each layer, and each region is further divided into its contour and its interior. Two different procedures code contours and interiors. A vector-based strategy codes the contour lines: straight-line segments cost few bits because they are represented as vectors, while the remaining pixels are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula whose coefficients are obtained by regression; this process is called interior painting. Unlike conventional block based coding methods, the residual between the original frame and the reconstructed frame (built by contour rebuilding and interior painting) is not sent to the decoder. In this proposal, contours are coded losslessly whereas interiors are coded lossily. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high-quality scenarios.
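
    The linear case of interior painting reduces to a least-squares plane fit per region; a sketch (the nonlinear variants and the contour coder are omitted, and the function names are ours):

    ```python
    import numpy as np

    def paint_interior(coords, depths):
        """Encoder: fit z = a*x + b*y + c to a region's interior pixels by
        regression; only the coefficients (a, b, c) need be transmitted."""
        A = np.column_stack([coords[:, 0], coords[:, 1], np.ones(len(coords))])
        coef, *_ = np.linalg.lstsq(A, depths, rcond=None)
        return coef

    def rebuild_interior(coords, coef):
        """Decoder: reconstruct the interior depths from the coefficients."""
        return coef[0] * coords[:, 0] + coef[1] * coords[:, 1] + coef[2]
    ```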

  • Hierarchical Decomposition of Depth Map Sequences for Representation of Three-Dimensional Dynamic Scenes

    Sung-Yeol KIM  Yo-Sung HO  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E90-D No:11
      Page(s):
    1813-1820

    In this paper, we propose a new scheme to represent three-dimensional (3-D) dynamic scenes using a hierarchical decomposition of depth maps. In the hierarchical decomposition, we split a depth map into four types of images: regular mesh, boundary, feature point, and number-of-layer (NOL) images. A regular mesh image is obtained by down-sampling the depth map. A boundary image is generated by gathering depth-map pixels in edge regions. For feature point images, we select depth-map pixels in non-edge regions according to their influence on the shape of the 3-D surface and convert the selected pixels into images. A NOL image contains structural information used to manage the other three images. To render a frame of a 3-D dynamic scene, we first generate an initial surface from the regular mesh, boundary, and NOL images, and then enhance it with the depth information of the feature point images. With the proposed scheme, we can represent consecutive 3-D scenes successfully within the framework of a multi-layer structure. Furthermore, the data of 3-D dynamic scenes represented by a mesh structure can be compressed by a 2-D video coder.
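
    Two of the four image types can be sketched directly from the description (the feature point selection and the NOL bookkeeping are omitted; the edge threshold is an assumption):

    ```python
    import numpy as np

    def decompose(depth, step=8, edge_thresh=10.0):
        """Split a depth map into a regular mesh image (down-sampling) and a
        boundary image (pixels near depth discontinuities)."""
        mesh = depth[::step, ::step]                  # regular mesh image
        gy, gx = np.gradient(depth.astype(float))
        edges = np.hypot(gx, gy) > edge_thresh        # depth discontinuities
        boundary = np.where(edges, depth, 0)          # boundary image
        return mesh, boundary
    ```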

  • Two-Dimensional Depth Data Measurement Using an Active Omni-Directional Range Sensor

    Insoo JOUNG  Ihnseok AHN  

     
    PAPER-Systems and Control

      Vol:
    E84-A No:5
      Page(s):
    1288-1292

    We have built an active omni-directional range sensor that can obtain omni-directional depth data through the use of a laser conic plane and a conic mirror. For mobile robot navigation, the proposed sensor system forms the laser conic plane by rotating a laser point source at high speed, producing a two-dimensional depth map in real time once an image is captured. Moreover, since the proposed sensor system measures the actual distance to target objects, it can also be applied to other measurement tasks.

  • Optimal Structure-from-Motion Algorithm for Optical Flow

    Naoya OHTA  Kenichi KANATANI  

     
    PAPER

      Vol:
    E78-D No:12
      Page(s):
    1559-1566

    This paper presents a new method for solving the structure-from-motion problem for optical flow. The fact that the structure-from-motion problem can be simplified by using the linearization technique is well known. However, it has been pointed out that the linearization technique reduces the accuracy of the computation. In this paper, we overcome this disadvantage by correcting the linearized solution in a statistically optimal way. Computer simulation experiments show that our method yields an unbiased estimator of the motion parameters which almost attains the theoretical bound on accuracy. Our method also enables us to evaluate the reliability of the reconstructed structure in the form of the covariance matrix. Real-image experiments are conducted to demonstrate the effectiveness of our method.
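
    For reference, the linearization in question is commonly built on the differential (optical-flow) epipolar constraint; one standard form, not necessarily this paper's notation, reads

    ```latex
    \dot{\mathbf{x}}^{\top}(\mathbf{v}\times\mathbf{x})
      + \mathbf{x}^{\top}\,\widehat{\boldsymbol{\omega}}\,(\mathbf{v}\times\mathbf{x}) = 0
    ```

    where \mathbf{x} is an image point in homogeneous coordinates, \dot{\mathbf{x}} its flow, \mathbf{v} the camera translation, \boldsymbol{\omega} the rotation, and \widehat{\boldsymbol{\omega}} the skew-symmetric matrix of \boldsymbol{\omega}. The constraint is bilinear in the unknowns; treating \mathbf{v} and the symmetric part of \widehat{\boldsymbol{\omega}}\,\widehat{\mathbf{v}} as independent unknowns makes it linear, which is the kind of simplification whose accuracy loss this paper corrects.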

  • Structure Recovery and Motion Estimation from Stereo Motion

    Shin-Chung WANG  Chung-Lin HUANG  

     
    PAPER

      Vol:
    E77-D No:11
      Page(s):
    1247-1258

    This paper presents a modified disparity measurement for recovering depth and a robust method for estimating motion parameters. First, we use phase correspondence to compute disparity, which requires less computation than previous methods based on correspondence or correlation. The modified disparity measurement uses a Gabor filter to analyze the local phase and an exponential filter to analyze the global phase; the two phases are added to form quasi-linear phases of the stereo image channels, which are used for stereo disparity estimation and for recovering the scene structure. We then use feature-based correspondence to find corresponding feature points in the temporal image pair. Finally, we combine the depth map with disparity motion stereo to estimate the 3-D motion parameters.
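
    The local-phase half of the measurement can be sketched with a 1-D Gabor filter per scanline (the exponential filter for the global phase is omitted, and the filter parameters are illustrative):

    ```python
    import numpy as np

    def gabor_phase(row, freq=0.25, sigma=4.0):
        """Local phase of a scanline from a complex 1-D Gabor filter."""
        t = np.arange(-4 * sigma, 4 * sigma + 1)
        g = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * t)
        return np.angle(np.convolve(row, g, mode='same'))

    def phase_disparity(left_row, right_row, freq=0.25):
        """Disparity from the phase difference: d ~ dphi / (2*pi*freq).
        Valid only within one period (|d| < 1/(2*freq)) because of wrapping."""
        dphi = gabor_phase(left_row, freq) - gabor_phase(right_row, freq)
        dphi = np.angle(np.exp(1j * dphi))            # wrap to (-pi, pi]
        return dphi / (2 * np.pi * freq)
    ```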