
Keyword Search Result

[Keyword] camera calibration (12 hits)

  • Single-Image Camera Calibration for Furniture Layout Using Natural-Marker-Based Augmented Reality

    Kazumoto TANAKA  Yunchuan ZHANG  

     
    LETTER-Multimedia Pattern Processing

  Publicized:
    2022/03/09
      Vol:
    E105-D No:6
      Page(s):
    1243-1248

    We propose an augmented-reality-based method for arranging furniture using natural markers extracted from the edges of the walls of rooms. The proposed method extracts natural markers and estimates the camera parameters from single images of rooms using deep neural networks. Experimental results show that in all the measurements, the superimposition error of the proposed method was lower than that of general marker-based methods that use practical-sized markers.

  • Extrinsic Camera Calibration of Display-Camera System with Cornea Reflections

    Kosuke TAKAHASHI  Dan MIKAMI  Mariko ISOGAWA  Akira KOJIMA  Hideaki KIMATA  

     
    PAPER-Image Recognition, Computer Vision

  Publicized:
    2018/09/26
      Vol:
    E101-D No:12
      Page(s):
    3199-3208

    In this paper, we propose a novel method to extrinsically calibrate a camera to a 3D reference object that is not directly visible from the camera. We use a human cornea as a spherical mirror and calibrate the extrinsic parameters from the reflections of the reference points. The main contribution of this paper is to present a cornea-reflection-based calibration algorithm with a simple configuration: five reference points on a single plane and one mirror pose. In this paper, we derive a linear equation and obtain a closed-form solution of extrinsic calibration by introducing two ideas. The first is to model the cornea as a virtual sphere, which enables us to estimate the center of the cornea sphere from its projection. The second is to use basis vectors to represent the position of the reference points, which enables us to deal with 3D information of reference points compactly. We demonstrate the performance of the proposed method with qualitative and quantitative evaluations using synthesized and real data.

  • Multiple View Geometry for Curvilinear Motion Cameras

    Cheng WAN  Jun SATO  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E94-D No:7
      Page(s):
    1479-1487

    This paper introduces a tensorial representation of multiple cameras with arbitrary curvilinear motions. It enables us to define a multilinear relationship among image points derived from non-rigid object motions viewed from multiple cameras with arbitrary curvilinear motions. We show the new multilinear relationship is useful for generating images and reconstructing 3D non-rigid object motions viewed from cameras with arbitrary curvilinear motions. The method is tested in real image sequences.

  • Computing Spatio-Temporal Multiple View Geometry from Mutual Projections of Multiple Cameras

    Cheng WAN  Jun SATO  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E93-D No:9
      Page(s):
    2602-2613

    The spatio-temporal multiple view geometry can represent the geometry of multiple images in the case where non-rigid arbitrary motions are viewed from multiple translational cameras. However, it requires many corresponding points and is sensitive to image noise. In this paper, we investigate mutual projections of cameras in four-dimensional space and show that they enable us to reduce the number of corresponding points required for computing the spatio-temporal multiple view geometry. Surprisingly, in the three-view case, no corresponding points are needed at all, provided all the cameras are projected onto the other cameras mutually for two time intervals. We also show that the stability of computing the spatio-temporal multiple view geometry is drastically improved by considering the mutual projections of cameras.

  • Estimating Number of People Using Calibrated Monocular Camera Based on Geometrical Analysis of Surface Area

    Hiroyuki ARAI  Isao MIYAGAWA  Hideki KOIKE  Miki HASEYAMA  

     
    PAPER-Image

      Vol:
    E92-A No:8
      Page(s):
    1932-1938

    We propose a novel technique for estimating the number of people in a video sequence; it remains stable even in crowded situations and needs no ground-truth data. By quantitatively analyzing the geometrical relationship between image pixels and the real-world volumes they intersect, the method derives the number of people directly from a foreground image. Because foreground detection is possible even in crowded situations, the proposed method can be applied in such cases. Moreover, it estimates the number of people in an a priori manner, so it needs no ground-truth data, unlike existing feature-based estimation techniques. Experiments show the validity of the proposed method.

  • Adaptive Colorimetric Characterization of Camera for the Variation of White Balance

    Eun-Su KIM  Sung-Hak LEE  Soo-Wook JANG  Kyu-Ik SOHNG  

     
    LETTER

      Vol:
    E88-C No:11
      Page(s):
    2086-2089

    The RGB signals generated by different cameras are not equal for the same scene. Therefore, cameras are characterized based on a CIE standard colorimetric observer. One method of deriving a colorimetric characterization matrix between camera RGB output signals and CIE XYZ tristimulus values is least-squares polynomial modeling. Yet, this involves tedious experiments to obtain a camera transfer matrix under various white balance points for the same camera. Accordingly, the current paper proposes a new method for obtaining camera transfer matrices under different white balances using a 3×3 camera transfer matrix obtained under a specific white balance point.
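The least-squares fit of a 3×3 characterization matrix described in this abstract can be sketched as follows. This is a minimal illustration only: the patch data and matrix values are hypothetical, and the plain linear model omits the white-balance adaptation that is the paper's actual contribution.

```python
import numpy as np

def fit_characterization_matrix(rgb, xyz):
    """Least-squares fit of a 3x3 matrix M such that xyz ~= M @ rgb
    (per color, treating rgb and xyz as column vectors).

    rgb, xyz: (N, 3) arrays of camera RGB signals and CIE XYZ
    tristimulus values measured for the same N color patches.
    """
    # lstsq solves rgb @ X = xyz in the least-squares sense; X is M^T.
    M_T, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M_T.T

# Synthetic check: recover a known (hypothetical) matrix from
# noiseless patch data.
rng = np.random.default_rng(0)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.random((24, 3))      # 24 hypothetical color patches
xyz = rgb @ M_true.T           # corresponding XYZ values
M_est = fit_characterization_matrix(rgb, xyz)
```

With real measurements the fit is overdetermined and noisy, so more than the minimal three patches are used and the residual of `lstsq` indicates the modeling error.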

  • Calibration of Real Scenes for the Reconstruction of Dynamic Light Fields

    Ingo SCHOLZ  Joachim DENZLER  Heinrich NIEMANN  

     
    PAPER-Background Estimation

      Vol:
    E87-D No:1
      Page(s):
    42-49

    The classic light field and lumigraph are two well-known approaches to image-based rendering, and subsequently many new rendering techniques and representations have been proposed based on them. Nevertheless, the main limitation remains that almost all of them consider only static scenes. In this contribution we describe a method for calibrating a scene which includes moving or deforming objects from multiple image sequences taken with a hand-held camera. For each image sequence the scene is assumed to be static, which allows the reconstruction of a conventional static light field. The dynamic light field is thus composed of multiple static light fields, each of which describes the state of the scene at a certain point in time. This allows not only the modeling of rigid moving objects, but any kind of motion including deformations. In order to facilitate the automatic calibration, some assumptions are made for the scene and input data, such as that the image sequences for each respective time step share one common camera pose and that only a minor part of the scene is actually in motion.

  • Calibration Method by Image Registration with Synthetic Image of 3D Model

    Toru TAMAKI  Masanobu YAMAMOTO  

     
    LETTER-Image Processing, Image Pattern Recognition

      Vol:
    E86-D No:5
      Page(s):
    981-985

    We propose a method for camera calibration based on image registration. The method registers two images: one is a real image of a calibration object with known shape and texture, captured by a camera; the other is a synthetic image containing the same object. The proposed method estimates the rotation and translation parameters of the object by using the depth information of the synthetic image. The Gauss-Newton method is used to minimize the residuals of the intensities of the two images. The proposed method does not depend on the initial values of the minimization and is applicable to noisy images. Experimental results using real images demonstrate the robustness against the initial state and image noise.
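The Gauss-Newton iteration named in this abstract can be illustrated generically. The sketch below applies the same update rule to a toy 1-D nonlinear least-squares problem rather than to image intensity residuals; the exponential model and all numbers are illustrative assumptions, not from the paper.

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, iters=20):
    """Generic Gauss-Newton: p <- p - (J^T J)^{-1} J^T r,
    minimizing sum(residual(p)**2)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Toy problem: recover (a, b) of the model y = a * exp(b * x)
# from noiseless samples.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.stack([np.exp(p[1] * x),              # dr/da
                          p[0] * x * np.exp(p[1] * x)],  # dr/db
                         axis=1)
p_hat = gauss_newton(res, jac, p0=[1.0, -1.0])
```

In the paper's setting, the residual vector holds per-pixel intensity differences between the real and synthetic images, and the Jacobian holds their derivatives with respect to the rotation and translation parameters.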

  • A Method for Compensation of Image Distortion with Image Registration Technique

    Toru TAMAKI  Tsuyoshi YAMAMURA  Noboru OHNISHI  

     
    PAPER

      Vol:
    E84-D No:8
      Page(s):
    990-998

    We propose a method for compensating image distortion by calibrating the intrinsic camera parameters through image registration, which requires no point-to-point correspondence. The proposed method divides the registration between a calibration pattern and a distorted image observed by a camera into two steps. The first step is the forward registration from the pattern, which corrects the displacement due to projection. The second step is the backward registration from the observed image, which compensates for the distortion of the image. Both steps use the Gauss-Newton method, a nonlinear optimization technique, to minimize the residuals of intensities so that the pattern and the observed image become the same. Experimental results show the usefulness of the proposed method. Finally, we discuss the convergence of the proposed method, which consists of the two registration steps.

  • A Multiple View Approach for Auto-Calibration of a Rotating and Zooming Camera

    Yongduek SEO  Min-Ho AHN  Ki-Sang HONG  

     
    PAPER

      Vol:
    E83-D No:7
      Page(s):
    1375-1385

    In this paper we deal with the problem of calibrating a rotating and zooming camera, without a 3D pattern, whose internal calibration parameters change frame by frame. First, we theoretically show the existence of the calibration parameters, up to an orthogonal transformation, under the assumption that the skew of the camera is zero. Auto-calibration becomes possible by analyzing inter-image homographies, which can be obtained from matches in images of the same scene or through direct nonlinear iteration. In general, at least four homographies are needed for auto-calibration. When we further assume that the aspect ratio is known and the principal point is fixed during the sequence, one homography yields the camera parameters; when the aspect ratio is unknown but the principal point is fixed, two homographies are enough. In the case of a fixed principal point, we suggest a method for obtaining the calibration parameters by searching the space of the principal point; otherwise, nonlinear iteration is applied. The algorithm is implemented and validated on several sets of synthetic data, and experimental results for real images are also given.
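The homography constraint that underlies this kind of rotating-camera auto-calibration can be checked numerically. The sketch below verifies that, for a fixed zero-skew intrinsic matrix K, an inter-image homography H = K R K^-1 leaves the dual image of the absolute conic K K^T invariant; the intrinsic values and rotation are hypothetical, and this is the constraint used by such methods, not the paper's full algorithm.

```python
import numpy as np

def K_matrix(f, a, u0, v0):
    """Zero-skew intrinsic matrix: focal f, aspect ratio a,
    principal point (u0, v0)."""
    return np.array([[f,   0.0,     u0],
                     [0.0, a * f,   v0],
                     [0.0, 0.0,    1.0]])

def rot_z(theta):
    """Rotation about the optical axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

K = K_matrix(800.0, 1.0, 320.0, 240.0)       # hypothetical intrinsics
H = K @ rot_z(0.1) @ np.linalg.inv(K)        # inter-image homography, fixed K

# Dual image of the absolute conic w* = K K^T is invariant under H:
# H w* H^T = K R R^T K^T = K K^T.
w_star = K @ K.T
w_mapped = H @ w_star @ H.T
```

When K varies per frame (zooming), the invariance becomes a transfer relation between the per-frame conics, and stacking it over several homographies gives the equations solved in auto-calibration.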

  • A Camera Calibration Method Using Parallelogramatic Grid Points

    Akira TAKAHASHI  Ikuo ISHII  Hideo MAKINO  Makoto NAKASHIZUKA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E79-D No:11
      Page(s):
    1579-1587

    In this paper, we propose a camera calibration method that estimates both the intrinsic parameters (perspective and distortion) and the extrinsic parameters (rotation and translation). All camera parameters can be determined from one or more images of a planar pattern consisting of parallelogrammatic grid points. As long as the pattern is visible, the relative relation between the camera and the pattern is arbitrary. Thus, one only has to prepare a pattern and capture one or more images while changing the relative pose arbitrarily; neither a solid ground-truth object nor a precise z-stage is required. Moreover, the constraint conditions imposed on the rotational parameters are satisfied explicitly; no intermediate parameters connecting several actual camera parameters are used. To account for the conflicting facts that distortion is small near the image center and that a small image region offers poor 3-D cues, we adopt an iterative procedure. The best parameters are searched for while changing the size and number of parallelograms selected from the grid points. Each iteration proceeds as follows: the perspective parameters are estimated from the shape of the parallelogram by nonlinear optimization; the rotational parameters are calculated from the shape of the parallelogram; the translational parameters are estimated from the size of the parallelogram by the least-squares method; and the distortion parameters are then estimated using all grid points by the least-squares method. Computer simulations demonstrate the efficiency of the proposed method, and results of an implementation using real images are also shown.

  • Calibration of Linear CCD Cameras Used in the Detection of the Position of the Light Spot

    Toyohiko HAYASHI  Rika KUSUMI  Michio MIYAKAWA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:8
      Page(s):
    912-918

    This paper presents a technique by which any linear CCD camera, be it one with lens distortions, or even one with misaligned lens and CCD, may be calibrated to obtain optimum performance characteristics. The camera-image formation model is described as a polynomial expression, which provides the line-of-sight flat-beam, including the target light-spot. The coefficients of the expression, which are referred to as camera parameters, can be estimated using the linear least-squares technique, in order to minimize the discrepancy between the reference points and the model-driven flat-beam. This technique requires, however, that a rough estimate of camera orientation, as well as a number of reference points, are provided. Experiments employing both computer simulations and actual CCD equipment certified that the model proposed can accurately describe the system, and that the parameter estimation is robust against noise.