In this paper, a novel projection-based method is presented for registering partial 3D point clouds, acquired with a multi-view camera, to reconstruct an indoor scene in 3D. Conventional registration methods for partial 3D point clouds are computationally expensive and time-consuming, and they are not robust to point clouds of low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, the depth images are refined using both temporal and spatial properties: the temporal step excludes 3D points with large variation over time, and the spatial step fills holes by referring to the four neighboring 3D points. Second, the 3D point clouds acquired from two views are projected onto a common image plane, and a two-step integer mapping is applied to search for correspondences with a modified KLT tracker. Fine registration is then carried out by minimizing distance errors over an adaptive search range. Finally, the final color of each point is computed from the colors of its corresponding points, and an indoor scene is reconstructed by applying this procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also registers low-precision 3D points effectively. Furthermore, only a few color and depth images are needed to reconstruct an indoor scene. The generated model can be used both for interaction with, and navigation in, a virtual environment.
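The temporal/spatial depth-refinement step described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the variance threshold, and the use of a per-pixel temporal mean are assumptions made for illustration. It removes depth pixels whose value varies strongly across frames (temporal refinement) and fills holes from the four valid neighbors (spatial refinement):

```python
import numpy as np

def refine_depth(frames, var_thresh=25.0):
    """Sketch of temporal+spatial depth refinement (illustrative only).

    frames: array of shape (T, H, W) -- consecutive depth images.
    Temporal step: pixels with large variance across frames are
    treated as unreliable and removed (set to 0, i.e. a hole).
    Spatial step: holes are filled with the mean of their valid
    four-connected neighbors.
    """
    stack = np.asarray(frames, dtype=np.float64)
    depth = stack.mean(axis=0)                  # per-pixel temporal mean
    depth[stack.var(axis=0) > var_thresh] = 0.0 # drop high-variance pixels

    filled = depth.copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            if depth[y, x] == 0.0:  # hole: average valid 4-neighbors
                nbrs = [depth[ny, nx]
                        for ny, nx in ((y - 1, x), (y + 1, x),
                                       (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] > 0.0]
                if nbrs:
                    filled[y, x] = sum(nbrs) / len(nbrs)
    return filled
```

For example, a pixel that reads 0, 100, 200 across three frames is rejected as unstable, then repaired from its neighbors in the spatial pass.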
Tadahiko HAMAGUCHI Toshiaki FUJII Toshio HONDA
A 3D display using super-high-density multi-view images should be able to reproduce natural stereoscopic views. In a super multi-view display system, viewpoints are sampled at an interval narrower than the diameter of the pupil of the human eye, so the parallax presented to a single eye can draw the eye's accommodation to the displayed object. We are now developing a real-time view-interpolation system for super multi-view 3D display. A multi-view camera captures multi-view images of an object using convergent capturing, which prevents resolution degradation. Most of the data processing is devoted to view interpolation and rectification. View interpolation runs on a high-speed image-processing board with digital-signal-processor (DSP) chips or single-instruction, multiple-data (SIMD) parallel-processor chips. The view-interpolation algorithm applies adaptive filtering to the epipolar-plane images (EPIs): the multi-view images are interpolated with the filters best suited to each EPI. Rectification, performed as a preprocess, converts the convergently captured multi-view images into equivalent parallel-captured ones. Using rectified multi-view images improves processing speed by confining the interpolation to each EPI.
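The reason rectification confines interpolation to each EPI can be sketched in code. In a rectified (parallel-capture) setup, stacking the same scanline from every view forms an EPI in which each scene point traces a straight line whose slope is its disparity, so a virtual scanline between two views can be synthesized by warping along those lines. The sketch below is not the paper's adaptive-filter algorithm; the forward-warping scheme, the per-pixel disparity input, and all names are illustrative assumptions:

```python
import numpy as np

def interpolate_epi_row(left, right, disparity, alpha):
    """Synthesize a virtual scanline between two views of an EPI (sketch).

    left, right: 1D intensity scanlines from two adjacent rectified views.
    disparity:   per-pixel disparity of the left scanline (pixels).
    alpha:       fractional position of the virtual view in [0, 1].

    A pixel at column x in the left view lies at x - d in the right
    view, so in the virtual view it lands at x - alpha*d; its color is
    blended from the two source views.
    """
    w = left.shape[0]
    out = np.zeros(w, dtype=np.float64)
    for x in range(w):
        d = disparity[x]
        xv = int(round(x - alpha * d))  # position in the virtual view
        xr = int(round(x - d))          # matching position in the right view
        if 0 <= xv < w and 0 <= xr < w:
            out[xv] = (1.0 - alpha) * left[x] + alpha * right[xr]
    return out
```

With zero disparity everywhere, the virtual scanline is simply the per-pixel blend of the two source scanlines, which is the degenerate case of interpolating along horizontal EPI lines.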