
Keyword Search Result

[Keyword] shape reconstruction (9 hits)

  • 3D Reconstruction with Globally-Optimized Point Selection

    Norimichi UKITA  Kazuki MATSUDA  

     
    PAPER-Image Recognition, Computer Vision
    Vol: E95-D No:12  Page(s): 3069-3077

    This paper proposes a method for reconstructing accurate 3D surface points. To this end, robust and dense reconstruction with Shape-from-Silhouettes (SfS) and accurate multiview stereo are integrated. Unlike the gradual shape shrinking and/or brute-force large-space search of existing space carving approaches, our method obtains 3D points by SfS and stereo independently, and then selects the correct ones from among them. The point selection is performed according to the spatial consistency and smoothness of 3D point coordinates and normals. The globally optimized points are selected by graph cuts. Experimental results with several subjects containing complex shapes demonstrate that our method outperforms existing approaches and our previous method.
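
    The selection step is only sketched in the abstract. As a minimal illustration, not the authors' exact energy, the Python sketch below poses a keep/discard labeling of candidate points as an s-t min-cut; the per-point costs, neighbor pairs, and smoothness weight are hypothetical inputs.

      # Sketch: select a globally consistent subset of candidate 3D points via s-t min-cut.
      # Assumed inputs (not from the paper): unary keep/discard costs and neighbor pairs.
      import networkx as nx

      def select_points(unary, neighbor_pairs, smooth_w=1.0):
          """unary: list of (cost_keep, cost_discard) per candidate point;
          neighbor_pairs: index pairs of spatially neighboring candidates."""
          G = nx.DiGraph()
          for i, (c_keep, c_discard) in enumerate(unary):
              # A point on the sink side of the cut is labeled "keep" and pays c_keep
              # (the s->i edge is cut); on the source side it pays c_discard instead.
              G.add_edge("s", i, capacity=c_keep)
              G.add_edge(i, "t", capacity=c_discard)
          for i, j in neighbor_pairs:
              # Neighboring candidates that receive different labels pay smooth_w.
              G.add_edge(i, j, capacity=smooth_w)
              G.add_edge(j, i, capacity=smooth_w)
          _, (_source_side, sink_side) = nx.minimum_cut(G, "s", "t")
          return sorted(n for n in sink_side if n != "t")

      # Toy usage: the middle candidate is weakly supported but kept for smoothness.
      print(select_points([(0.2, 1.0), (0.6, 0.5), (0.1, 1.2)], [(0, 1), (1, 2)]))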

  • Direct Shape Carving: Smooth 3D Points and Normals for Surface Reconstruction

    Kazuki MATSUDA  Norimichi UKITA  

     
    PAPER-3D Reconstruction
    Vol: E95-D No:7  Page(s): 1811-1818

    This paper proposes a method for reconstructing a smooth and accurate 3D surface. Recent machine vision techniques can reconstruct accurate 3D points and normals of an object. The reconstructed point cloud is then used to generate the object's 3D surface by surface reconstruction; the more accurate the point cloud, the more correct the resulting surface. To improve the surface, we propose a way to integrate the advantages of existing point reconstruction techniques. Specifically, robust and dense reconstruction with Shape-from-Silhouettes (SfS) and accurate stereo reconstruction are integrated. Unlike the gradual shape shrinking of space carving, our method obtains 3D points by SfS and stereo independently and accepts only the correctly reconstructed points. Experimental results show the improvement achieved by our method.
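
    The Shape-from-Silhouettes component is only named in the abstract. The sketch below illustrates the basic visual-hull test that keeps a voxel only if it projects inside every silhouette; the projection matrices and masks are placeholders, and the paper's method additionally draws points from stereo rather than carving alone.

      # Rough Shape-from-Silhouettes (visual hull) membership test for one voxel,
      # with fabricated camera matrices and silhouette masks.
      import numpy as np

      def inside_all_silhouettes(voxel_xyz, projections, silhouettes):
          """True if the 3D point projects inside the silhouette of every view."""
          X = np.append(voxel_xyz, 1.0)                  # homogeneous coordinates
          for P, mask in zip(projections, silhouettes):
              u, v, w = P @ X
              if w <= 0:                                 # behind the camera
                  return False
              col, row = int(round(u / w)), int(round(v / w))
              rows, cols = mask.shape
              if not (0 <= row < rows and 0 <= col < cols and mask[row, col]):
                  return False
          return True

      # Toy usage with a single fabricated view whose silhouette covers the image.
      P = np.hstack([np.eye(3), np.zeros((3, 1))])
      mask = np.ones((10, 10), dtype=bool)
      print(inside_all_silhouettes(np.array([2.0, 3.0, 1.0]), [P], [mask]))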

  • Human Foot Reconstruction from Multiple Camera Images with Foot Shape Database

    Jiahui WANG  Hideo SAITO  Makoto KIMURA  Masaaki MOCHIMARU  Takeo KANADE  

     
    PAPER-Image Recognition, Computer Vision
    Vol: E89-D No:5  Page(s): 1732-1742

    Recently, research and development on measuring and modeling the human body have been receiving much attention. Our aim is to reconstruct an accurate shape of a human foot from multiple camera images, which can capture the dynamic behavior of the object. In this paper, a foot-shape database is used for accurate reconstruction of the human foot. By using Principal Component Analysis, the foot shape can be represented with new, meaningful variables, and the dimensionality of the data is reduced. Thus, the shape of the object can be recovered efficiently, even when the object is partially occluded in some input views. To demonstrate the proposed method, two kinds of experiments are presented: reconstruction of a human foot in a virtual reality environment with CG multi-camera images, and in the real world with eight CCD cameras. In the experiments, the reconstruction error of our method is around 2 mm on average, while the error is more than 4 mm with the conventional volume intersection method.
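
    The PCA representation is not spelled out in the abstract. As a rough sketch with synthetic data standing in for the foot-shape database, the fragment below builds a principal-component subspace and completes a partially observed shape by least squares in that subspace; all dimensions and variable names are illustrative.

      # Sketch: PCA shape subspace from a (synthetic) shape database, then completion
      # of a partially observed shape by least-squares fitting in the subspace.
      import numpy as np

      rng = np.random.default_rng(0)
      database = rng.normal(size=(50, 30))        # 50 training shapes, 30 coordinates each

      mean = database.mean(axis=0)
      U, S, Vt = np.linalg.svd(database - mean, full_matrices=False)
      basis = Vt[:5]                              # first 5 principal components (5 x 30)

      observed_idx = np.arange(20)                # only 20 of 30 coordinates are visible
      target = database[0]                        # pretend this shape is the query
      partial = target[observed_idx]

      # Solve for the component weights that best explain the visible coordinates.
      A = basis[:, observed_idx].T                # (20 x 5)
      b = partial - mean[observed_idx]
      weights, *_ = np.linalg.lstsq(A, b, rcond=None)
      reconstruction = mean + weights @ basis     # full 30-coordinate estimate

      print(np.abs(reconstruction - target).mean())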

  • 3D Fundus Shape Reconstruction and Display from Stereo Fundus Images

    Koichiro DEGUCHI  Daisuke KAWAMATA  Kanae MIZUTANI  Hidekata HONTANI  Kiwa WAKABAYASHI  

     
    PAPER
    Vol: E83-D No:7  Page(s): 1408-1414

    A new method is developed to recover and display the 3D fundus pattern on the inner bottom surface of the eyeball from a stereo fundus image pair. A simple stereo technique does not work for fundus stereo images, because the fundus is observed through the eye lens and a contact wide-angle enlarging lens. In this method, utilizing the fact that the fundus forms part of a sphere, we identify the optical parameters and correct the skews of the lines of sight. Then, we obtain 3D images of the fundus by back-projecting the stereo images.
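
    Purely as a generic illustration of the final back-projection step, and ignoring the eye-lens and contact-lens correction that the paper performs first, the following sketch triangulates one corresponding point from two views by the standard linear (DLT) method; the camera matrices are fabricated for the example.

      # Sketch: linear (DLT) triangulation of one corresponding point from two views.
      import numpy as np

      def triangulate(P1, P2, x1, x2):
          """x1, x2: pixel coordinates (u, v) of the same point in the two views."""
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]                       # inhomogeneous 3D point

      # Toy usage: two cameras shifted along x, observing the point (0, 0, 5).
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
      point = np.array([0.0, 0.0, 5.0, 1.0])
      x1 = (P1 @ point)[:2] / (P1 @ point)[2]
      x2 = (P2 @ point)[:2] / (P2 @ point)[2]
      print(triangulate(P1, P2, x1, x2))            # ~ [0, 0, 5]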

  • Modeling of Urban Scenes by Aerial Photographs and Simply Reconstructed Buildings

    Katsuyuki KAMEI  Wayne HOY  Takashi TAMADA  Kazuo SEO  

     
    PAPER
    Vol: E83-D No:7  Page(s): 1441-1449

    In many fields such as city administration and facilities management, there is an increasing demand for a Geographic Information System (GIS) that provides users with automated mapping functions. A mechanism that displays 3D views of an urban scene is particularly required, because it allows the construction of an intuitive and understandable environment for managing objects in the scene. In this paper, we present a new urban modeling system utilizing both image-based and geometry-based approaches. Our method is based on a new concept in which a wide urban area can be displayed with natural, photo-realistic images, and each object drawn in the view can be identified by pointing to it. First, to generate natural urban views from any viewpoint, we employ an image-based rendering method, Image Walkthrough, and modify it to handle aerial images. This method can interpolate and generate natural views by assembling several source photographs. Next, to identify each object in the scene, we recover its shape using computer vision techniques (a geometry-based approach). The rough shape of each building is reconstructed from various aerial images, and its position in the generated view is then determined. This makes it possible to identify each building from an urban view. Combining both approaches yields a new style of urban information management: users can gain an intuitive understanding of the area and easily identify their target by generating natural views from any viewpoint and suitably reconstructing the shapes of objects. We have built a prototype system based on this new concept of GIS, which has shown the validity of our method.

  • High Speed 3D Reconstruction by Spatio-Temporal Division of Video Image Processing

    Yoshinari KAMEDA  Takeo TAODA  Michihiko MINOH  

     
    PAPER
    Vol: E83-D No:7  Page(s): 1422-1428

    A high-speed 3D shape reconstruction method using multiple video cameras and multiple computers on a LAN is presented. The video cameras are set to surround the real 3D space where people exist. The reconstructed 3D space is displayed in voxel format, and users can see the space from any viewpoint with a VR viewer. We implemented a prototype system that performs the 3D reconstruction at 10.55 fps with a delay of 313 ms.

  • Factorization Method for Structure from Perspective Multi-View Images

    Koichiro DEGUCHI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E81-D No:11  Page(s): 1281-1289

    This paper describes a factorization-based algorithm that reconstructs 3D object structure as well as motion from a set of multiple uncalibrated perspective images. The factorization method introduced by Tomasi and Kanade is believed to be applicable only under the assumption of a linear approximation of the imaging system. In this paper we show that the method can be extended to truly perspective images if the projective depths are recovered. We establish this fact by interpreting their purely mathematical theory in terms of the projective geometry of the imaging system, thereby giving physical meanings to the parameters involved. We also provide a method to recover the projective depths using the fundamental matrices and epipoles estimated from pairs of images in the image set. Our method is applicable to general cases where the images are not taken by a single moving camera but by different cameras having individual camera parameters. The experimental results clearly demonstrate the feasibility of the proposed method.
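
    To make the factorization step concrete, here is a rough numerical sketch: with projective depths assumed known (here taken from a synthetic ground truth rather than recovered from fundamental matrices as in the paper), the scaled measurement matrix has rank 4 and factors by SVD into motion and structure up to a projective transformation.

      # Sketch: rank-4 factorization of the scaled measurement matrix W, whose rows
      # stack depth_ij * (u_ij, v_ij, 1) for camera i and point j. Depths come from
      # synthetic ground truth here, not from the paper's estimation procedure.
      import numpy as np

      rng = np.random.default_rng(1)
      n_cams, n_pts = 4, 12
      X = np.vstack([rng.normal(size=(3, n_pts)), np.ones((1, n_pts))])   # 4 x n_pts
      Ps = [np.hstack([np.eye(3), rng.normal(size=(3, 1))]) for _ in range(n_cams)]

      # Build the (3*n_cams) x n_pts scaled measurement matrix from the projections.
      W = np.vstack([P @ X for P in Ps])          # each 3-row block is depth * (u, v, 1)

      U, S, Vt = np.linalg.svd(W, full_matrices=False)
      print("singular values:", np.round(S[:6], 3))   # only 4 are non-negligible

      # Factor into motion (3*n_cams x 4) and structure (4 x n_pts); the split is
      # unique only up to a 4x4 projective transformation H: W = (M H)(H^-1 S).
      motion = U[:, :4] * S[:4]
      structure = Vt[:4]
      print("reprojection residual:", np.linalg.norm(W - motion @ structure))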

  • Unique Shape Reconstruction Using Interreflections

    Jun YANG  Dili ZHANG  Noboru OHNISHI  Noboru SUGIE  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E81-D No:3  Page(s): 307-316

    We discuss the uniqueness of 3-D shape reconstruction of a polyhedron from a single shading image. First, we analytically show that multiple convex (and concave) shape solutions usually exist for a simple polyhedron if interreflections are not considered. We then propose a new approach that uniquely determines the concave shape solution by using interreflections as a constraint. An example in which two convex and two concave shapes were obtained from a single shaded image of a trihedral corner was given by Horn; however, how many solutions exist for a general polyhedron was not described. Using a reflectance map, we analytically show that multiple convex (and concave) shape solutions usually exist for a pyramid if the interreflection distribution is not considered. However, if the interreflection distribution is used as a constraint that limits the shape solution for a concave polyhedron, the polyhedral shape can be uniquely determined. Interreflections, which were considered deleterious in conventional approaches, are thus used as a constraint to determine the shape solution in our approach.
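
    As a small numerical illustration of the ambiguity, and not of the paper's interreflection analysis itself, the Lambertian reflectance map assigns the same brightness to every normal on a cone around the light direction, so a single facet intensity cannot fix the facet orientation; the light direction and normals below are invented for the example.

      # Sketch: two different facet normals that produce identical Lambertian shading,
      # illustrating why a single shaded image is ambiguous without extra constraints
      # (such as the interreflection constraint proposed in the paper).
      import numpy as np

      def lambertian(normal, light):
          n = normal / np.linalg.norm(normal)
          s = light / np.linalg.norm(light)
          return max(0.0, float(n @ s))

      light = np.array([0.0, 0.0, 1.0])
      n1 = np.array([np.sin(0.4), 0.0, np.cos(0.4)])      # facet tilted toward +x
      n2 = np.array([-np.sin(0.4), 0.0, np.cos(0.4)])     # facet tilted toward -x

      print(lambertian(n1, light), lambertian(n2, light))  # equal brightness, different shapes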

  • Reconstruction of Polyhedra by a Mechanical Theorem Proving Method

    Kyun KOH  Koichiro DEGUCHI  Iwao MORISHITA  

     
    PAPER
    Vol: E76-D No:4  Page(s): 437-445

    In this paper we propose a new application of Wu's mechanical theorem proving method to reconstruct polyhedra in 3-D space from their projection images. First, we set up three groups of equations: the first expresses the geometric relations that vertices lie on a plane segment, lie on a line segment, or form an angle in 3-D space; the second expresses the corresponding relations on the image plane; and the third expresses the relations between the vertices in 3-D space and their correspondences on the image plane. Next, we classify these equations into two sets, a set of hypotheses and a conjecture. We apply this formulation to seven model cases. We then apply Wu's method to prove that the conjecture follows from the hypotheses, and obtain the pseudodivided remainders of the conjectures, which represent relations of angles or lengths between 3-D space and the projected image. By this method we obtained new geometrical relations for the seven model cases. We also show that, in the regions of the image plane where the corresponding spatial measures cannot be reconstructed, the leading coefficients of the hypothesis polynomials approach zero. If the vertex of an image angle lies in such a region, its spatial angle cannot be calculated by direct manipulation of the hypothesis polynomials and the conjecture polynomial. However, we show that by stability analysis of the pseudodivided remainder, the spatial angles can be calculated even in those regions.
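
    The core algebraic step of Wu's method, pseudodivision of the conjecture polynomial by the triangularized hypothesis polynomials, can be illustrated with a tiny SymPy example; the polynomials below are invented for demonstration and are unrelated to the seven polyhedron models in the paper.

      # Sketch: pseudodivision with SymPy's prem(). A zero pseudo-remainder means the
      # conjecture follows from the hypothesis, provided the leading coefficients do
      # not vanish -- the degenerate case the paper analyzes via stability analysis.
      from sympy import symbols, prem, expand

      x, y = symbols("x y")

      hypothesis = x**2 + y**2 - 1                       # hypothesis polynomial
      conjecture = expand((x**2 + y**2 - 1) * (y + 3))   # a consequence of it

      print(prem(conjecture, hypothesis, x))             # prints 0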