
Keyword Search Result

[Keyword] camera (154 hits)

Results 141-154 of 154

  • Real-Time Tracking of Multiple Moving Object Contours in a Moving Camera Image Sequence

    Shoichi ARAKI  Takashi MATSUOKA  Naokazu YOKOYA  Haruo TAKEMURA  

     
    PAPER-Image Processing, Image Pattern Recognition

    Vol: E83-D No:7, Page(s): 1583-1591

    This paper describes a new method for detecting and tracking moving objects in an image sequence from a moving camera, using robust estimation and active contour models. We assume that the apparent background motion between two consecutive frames can be approximated by an affine transformation. To register the static background, we estimate the affine parameters with the LMedS (Least Median of Squares) method, a robust estimator. Split-and-merge contour models are employed to track multiple moving objects. The image energy of the contour models is defined on the difference image obtained by subtracting the previous frame, warped with the estimated affine parameters, from the current frame. We have implemented the method on an image processing system built from DSP boards for real-time tracking of moving objects from a moving camera image sequence.
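
    A minimal NumPy sketch of the LMedS step described above, assuming point correspondences between consecutive frames are already available (the correspondence extraction, trial count, and sample size are illustrative choices, not the paper's):

        import numpy as np

        def fit_affine(src, dst):
            # Solve dst = A @ src + t (6 unknowns; 3 point pairs suffice).
            M = np.zeros((2 * len(src), 6))
            M[0::2, 0:2] = src
            M[0::2, 4] = 1.0
            M[1::2, 2:4] = src
            M[1::2, 5] = 1.0
            p, *_ = np.linalg.lstsq(M, dst.ravel(), rcond=None)
            return np.array([[p[0], p[1]], [p[2], p[3]]]), p[4:6]

        def lmeds_affine(src, dst, trials=500, seed=0):
            # Keep the minimal sample whose fit minimizes the median squared residual.
            rng = np.random.default_rng(seed)
            best, best_med = None, np.inf
            for _ in range(trials):
                idx = rng.choice(len(src), 3, replace=False)
                A, t = fit_affine(src[idx], dst[idx])
                med = np.median(np.sum((src @ A.T + t - dst) ** 2, axis=1))
                if med < best_med:
                    best, best_med = (A, t), med
            return best

        # Demo: 80% background points under a known affine motion,
        # 20% "moving object" outliers.
        rng = np.random.default_rng(1)
        src = rng.uniform(0, 100, (100, 2))
        dst = src @ np.array([[1.0, 0.02], [-0.02, 1.0]]).T + np.array([3.0, -2.0])
        dst[:20] += rng.uniform(5, 15, (20, 2))   # outliers
        A, t = lmeds_affine(src, dst)
        print(A, t)   # close to the true affine motion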

  • A Multiple View Approach for Auto-Calibration of a Rotating and Zooming Camera

    Yongduek SEO  Min-Ho AHN  Ki-Sang HONG  

     
    PAPER

    Vol: E83-D No:7, Page(s): 1375-1385

    In this paper we deal with the problem of calibrating a rotating and zooming camera whose internal calibration parameters change frame by frame, without any 3D calibration pattern. First, we show theoretically that the calibration parameters exist up to an orthogonal transformation under the assumption that the camera skew is zero. Auto-calibration becomes possible by analyzing inter-image homographies, which can be obtained from matches between images of the same scene or through direct nonlinear iteration. In general, at least four homographies are needed. When we further assume that the aspect ratio is known and the principal point is fixed during the sequence, one homography yields the camera parameters; when the aspect ratio is unknown but the principal point is fixed, two homographies are enough. In the fixed-principal-point case, we suggest a method for obtaining the calibration parameters by searching the space of the principal point; otherwise, nonlinear iteration is applied. The algorithm is implemented and validated on several sets of synthetic data, and experimental results for real images are also given.
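
    The paper's full multi-homography algorithm is not reproduced here; as a hedged sketch of the underlying constraint in the simplest case it treats (unit aspect ratio, known fixed principal point), the focal length can be found by searching for the K that turns an inter-image homography of a purely rotating camera into a rotation:

        import numpy as np

        def rotation_error(H, f, cx, cy):
            # For a purely rotating camera H ~ K R K^-1, so with the right K
            # the scale-normalized K^-1 H K must be orthogonal.
            K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
            R = np.linalg.inv(K) @ H @ K
            R /= np.cbrt(np.linalg.det(R))   # remove the unknown scale of H
            return np.linalg.norm(R @ R.T - np.eye(3))

        def focal_from_homography(H, cx, cy, f_grid=np.linspace(200, 3000, 561)):
            # 1-D grid search over focal length; a coarse stand-in for the
            # paper's search over the principal-point space.
            errs = [rotation_error(H, f, cx, cy) for f in f_grid]
            return f_grid[int(np.argmin(errs))]

        # Demo: recover f = 800 from a synthetic 10-degree pan homography.
        f0, cx, cy = 800.0, 320.0, 240.0
        K = np.array([[f0, 0, cx], [0, f0, cy], [0, 0, 1.0]])
        a = np.deg2rad(10.0)
        R = np.array([[np.cos(a), 0, np.sin(a)],
                      [0, 1, 0],
                      [-np.sin(a), 0, np.cos(a)]])
        print(focal_from_homography(K @ R @ np.linalg.inv(K), cx, cy))  # -> 800.0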

  • Estimation of Camera Rotation Using Quasi Moment Features

    Hiroyuki SHIMAI  Toshikatsu KAWAMOTO  Takaomi SHIGEHARA  Taketoshi MISHIMA  Masaru TANAKA  Takio KURITA  

     
    PAPER

    Vol: E83-A No:6, Page(s): 1005-1013

    We present two methods for estimating camera rotation from two images taken by an active camera before and after the rotation. Quasi moment features are constructed from the representation of the projected rotation group. The camera rotation can then be estimated by applying the singular value decomposition (SVD) or Newton's method to these tensor quasi moment features. In both cases, the 3D rotation of the active camera is estimated from only two projected images. We also give experiments estimating the rotation of an actual active camera to show the effectiveness of these methods.
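
    The quasi moment features themselves are not reconstructed here; the sketch below shows only the SVD step, applied to generic 3-D feature vectors that transform linearly under rotation (orthogonal Procrustes), which is one standard way such an estimate is obtained:

        import numpy as np

        def rotation_from_features(F_before, F_after):
            # Orthogonal Procrustes: the R minimizing ||R F_before - F_after||_F
            # comes from the SVD of the cross-correlation of the feature sets.
            U, _, Vt = np.linalg.svd(F_after @ F_before.T)
            d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
            return U @ np.diag([1.0, 1.0, d]) @ Vt

        # Demo: 3-D features rotated by 0.3 rad about a random axis.
        rng = np.random.default_rng(0)
        F = rng.standard_normal((3, 50))
        q = rng.standard_normal(3); q /= np.linalg.norm(q)
        Kx = np.array([[0, -q[2], q[1]], [q[2], 0, -q[0]], [-q[1], q[0], 0]])
        R_true = np.eye(3) + np.sin(0.3) * Kx + (1 - np.cos(0.3)) * (Kx @ Kx)
        print(np.allclose(rotation_from_features(F, R_true @ F), R_true))  # True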

  • Creating Virtual Environment Based on Video Data with Forward Motion

    Xiaohua ZHANG  Hiroki TAKAHASHI  Masayuki NAKAJIMA  

     
    PAPER-Multimedia Pattern Processing

    Vol: E83-D No:4, Page(s): 931-936

    The construction of photo-realistic 3D scenes from video data is an active and competitive area of research in computer vision, image processing and computer graphics. In this paper we present our recent work in this area. Unlike most methods of 3D scene construction, we generate virtual environments from a video sequence captured by a camera moving forward. Each frame is decomposed into sub-images, which are registered with their counterparts using the Levenberg-Marquardt iterative algorithm to estimate motion parameters. The registered sub-images are then pasted together to form a pseudo-3D space. By controlling its position and direction, a virtual camera can walk through this virtual space, generating novel 2D views that give an immersive impression. Even when the virtual camera moves deep into the virtual environment, it still obtains novel views of relatively high resolution.
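
    As a hedged illustration of the Levenberg-Marquardt registration step, the sketch below estimates a reduced motion model (scale plus translation, standing in for the paper's per-sub-image parameter set) between two synthetic images with SciPy's LM solver:

        import numpy as np
        from scipy.ndimage import map_coordinates, shift
        from scipy.optimize import least_squares

        def residuals(p, ref, mov, ys, xs):
            # p = (scale, ty, tx); resample 'mov' under the motion model
            # and compare against 'ref' at the same sample points.
            s, ty, tx = p
            warped = map_coordinates(mov, [s * ys + ty, s * xs + tx], order=1)
            return warped - ref[ys, xs]

        # Synthetic pair: 'mov' is 'ref' translated by (0.5, -0.3) pixels.
        yy, xx = np.mgrid[0:64, 0:64]
        ref = np.sin(xx / 6.0) + np.cos(yy / 9.0)
        mov = shift(ref, (0.5, -0.3), order=3)
        ys, xs = (a.ravel() for a in np.mgrid[8:56, 8:56])
        fit = least_squares(residuals, x0=[1.0, 0.0, 0.0], method='lm',
                            args=(ref, mov, ys, xs))
        print(fit.x)   # approx (1.0, 0.5, -0.3)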

  • Accurate Shape from Focus Using Second Order Curved Search Windows

    Joungil YUN  Tae S. CHOI  

     
    LETTER-Computer Graphics

    Vol: E83-A No:3, Page(s): 571-574

    In this letter we propose a new Shape from Focus (SFF) method using piecewise curved search windows for accurate 3-D shape recovery. The new method uses the piecewise curved windows to compute the focus measure and to search for the Focus Image Surface (FIS) in image space. Experimental results show that the new method is more accurate than previous SFF methods.
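
    For context, a baseline SFF pipeline in NumPy/SciPy (flat square windows and a sum-modified-Laplacian focus measure; the letter's piecewise curved windows and FIS search are precisely what this sketch does not reproduce):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def modified_laplacian(img):
            # |2I - I(x-1,.) - I(x+1,.)| + |2I - I(.,y-1) - I(.,y+1)|
            ml = np.zeros_like(img)
            ml[1:-1, :] += np.abs(2 * img[1:-1, :] - img[:-2, :] - img[2:, :])
            ml[:, 1:-1] += np.abs(2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:])
            return ml

        def shape_from_focus(stack, win=4):
            # stack: (n_frames, H, W) focal stack. Depth index per pixel is
            # the argmax of the focus measure aggregated over a flat square
            # window; curving this window along the FIS is the letter's point.
            fm = np.array([uniform_filter(modified_laplacian(f), size=2 * win + 1)
                           for f in stack])
            return np.argmax(fm, axis=0)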

  • Enhanced Backscattering from Random Media with Multiple Suspensions

    Yasuyuki OKAMURA  Hiroyuki KAI  Sadahiko YAMAMOTO  

     
    PAPER-Electromagnetic Theory

    Vol: E82-C No:10, Page(s): 1853-1856

    We report an experiment on enhanced backscattering of light in binary and ternary suspensions of rutile and/or alumina particles. Observing the phenomenon with a conventional CCD camera system, we found that the angular line shape and the enhancement factor agreed with the theoretically predicted curve and value. Observation of the angular distribution in the backscattering direction supported the hypothesis proposed by Pine et al., in which the transport mean free path of a polydisperse mixture is expressed by summing its reciprocal values weighted over the particle sizes.
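
    The mixing rule referred to above reduces, as stated, to a weighted sum of reciprocal transport mean free paths; a tiny sketch with illustrative numbers (not values from the paper):

        import numpy as np

        def mixture_transport_mfp(l_star, weights):
            # Reciprocal transport mean free paths add, weighted over the
            # species; the weights here are illustrative scattering-strength
            # fractions, not taken from the paper.
            l_star = np.asarray(l_star, dtype=float)
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()
            return 1.0 / np.sum(w / l_star)

        # e.g. a binary suspension with l* of 20 and 50 (arbitrary units):
        print(mixture_transport_mfp([20.0, 50.0], [0.5, 0.5]))  # -> ~28.6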

  • Omnidirectional Sensing and Its Applications

    Yasushi YAGI  

     
    INVITED SURVEY PAPER

    Vol: E82-D No:3, Page(s): 568-579

    The goal of this paper is to present a critical survey of the existing literature on omnidirectional sensing. Vision applications such as autonomous robot navigation, telepresence and virtual reality are expanding through the use of cameras with a wide angle of view. In particular, a real-time omnidirectional camera with a single center of projection is suitable for analysis and monitoring, because any desired image projected on any designated image plane, such as a pure perspective image or a panoramic image, can easily be generated from the omnidirectional input image. In this paper, I review the designs and principles of existing omnidirectional cameras, which can acquire an omnidirectional (360-degree) field of view, and their applications in autonomous robot navigation, telepresence, remote surveillance and virtual reality.
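
    As one concrete instance of the projections mentioned above, a sketch of unwarping a single-center-of-projection omnidirectional image into a cylindrical panorama (the image center, radial range, and orientation conventions are illustrative assumptions):

        import numpy as np
        from scipy.ndimage import map_coordinates

        def unwarp_panorama(omni, center, r_in, r_out, width=720):
            # Resample the annular omnidirectional image into a panorama;
            # a single center of projection is what makes such remapping
            # geometrically consistent, as the survey notes.
            theta = np.linspace(0.0, 2 * np.pi, width, endpoint=False)
            rr, tt = np.meshgrid(np.arange(r_in, r_out), theta, indexing='ij')
            ys = center[0] + rr * np.sin(tt)
            xs = center[1] + rr * np.cos(tt)
            return map_coordinates(omni, [ys, xs], order=1)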

  • Lower Bound of Image Correlation Coefficient as a Measure of Image Quality

    Bongsoon KANG  Hoongee YANG  

     
    LETTER-Antennas and Propagation

    Vol: E81-B No:4, Page(s): 811-813

    This letter derives a theoretical lower bound on the image correlation coefficient, which judges the extent of image degradation. It is shown that the correlation coefficient depends on the phase-error variance in the antenna aperture domain, so image quality can be predicted before image formation. The theoretical bound is verified with experimental data in which the dominant scatterer algorithm (DSA) is used for phase synchronization.
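
    The letter's closed-form bound is not reproduced here; the toy Monte-Carlo below only illustrates the kind of dependence involved, namely how the magnitude of a normalized complex correlation decays with Gaussian phase-error variance:

        import numpy as np

        # A complex signal whose phase is perturbed by zero-mean Gaussian
        # errors of std sigma; |correlation| falls off near exp(-sigma^2/2).
        rng = np.random.default_rng(0)
        s = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
        for sigma in (0.1, 0.3, 0.6, 1.0):
            n = s * np.exp(1j * rng.normal(0.0, sigma, s.size))
            rho = abs(np.vdot(s, n)) / (np.linalg.norm(s) * np.linalg.norm(n))
            print(f"sigma={sigma:.1f}  rho={rho:.3f}  "
                  f"exp(-sigma^2/2)={np.exp(-sigma**2 / 2):.3f}")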

  • A Camera Calibration Method Using Parallelogramatic Grid Points

    Akira TAKAHASHI  Ikuo ISHII  Hideo MAKINO  Makoto NAKASHIZUKA  

     
    PAPER-Image Processing,Computer Graphics and Pattern Recognition

    Vol: E79-D No:11, Page(s): 1579-1587

    In this paper, we propose a camera calibration method that estimates both the intrinsic parameters (perspective and distortion) and the extrinsic parameters (rotation and translation). All camera parameters can be determined from one or more images of a planar pattern consisting of parallelogramatic grid points. As long as the pattern is visible, the relative placement of camera and pattern is arbitrary: we only have to prepare a pattern and take one or more images while changing that placement freely; neither a solid ground-truth object nor a precise z-stage is required. Moreover, the constraint conditions imposed on the rotation parameters are satisfied explicitly; no intermediate parameters that entangle several actual camera parameters are used. To balance the conflicting facts that distortion is small near the image center while a small image region gives poor 3-D cues, we adopt an iterative procedure that searches for the best parameters while changing the size and number of the parallelograms selected from the grid points. Each iteration proceeds as follows: the perspective parameters are estimated from the shape of a parallelogram by nonlinear optimization; the rotation parameters are calculated from the shape of the parallelogram; the translation parameters are estimated from the size of the parallelogram by the least squares method; then the distortion parameters are estimated from all grid points by the least squares method. Computer simulations demonstrate the efficiency of the proposed method, and results obtained with real images are also shown.
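
    The full iteration is not reproduced here; as a sketch of one of its least-squares sub-steps, the following fits a two-coefficient radial distortion model from undistorted/distorted point pairs (the model order and the demo values are illustrative, not the paper's):

        import numpy as np

        def fit_radial_distortion(undist, dist, center):
            # Linear least squares for k1, k2 in the usual radial model
            #   x_d = x_u + (x_u - c) * (k1*r^2 + k2*r^4),  r = |x_u - c|.
            u = undist - center
            r2 = np.sum(u ** 2, axis=1)
            A = np.column_stack([u.ravel() * np.repeat(r2, 2),
                                 u.ravel() * np.repeat(r2 ** 2, 2)])
            b = (dist - undist).ravel()
            k, *_ = np.linalg.lstsq(A, b, rcond=None)
            return k   # (k1, k2)

        # Demo: synthesize distorted points, recover (k1, k2) = (1e-6, 1e-12).
        rng = np.random.default_rng(0)
        c = np.array([320.0, 240.0])
        u_pts = rng.uniform(0, 640, size=(100, 2))
        r2 = np.sum((u_pts - c) ** 2, axis=1)
        d_pts = u_pts + (u_pts - c) * (1e-6 * r2 + 1e-12 * r2 ** 2)[:, None]
        print(fit_radial_distortion(u_pts, d_pts, c))   # ~ [1e-06, 1e-12]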

  • Structure and Motion of 3D Moving Objects from Multi-Views

    Takeaki Y. MORI  Satoshi SUZUKI  Takayuki YASUNO  

     
    PAPER

    Vol: E78-D No:12, Page(s): 1598-1606

    This paper proposes a new method that robustly recovers the 3D structure and 3D motion of moving objects from a few views. It recovers 3D feature points as the intersections of back-projection lines drawn from each camera's optical center through the projected feature points on that camera's image plane. By discussing the relation between the occurrence probability of false 3D feature points and the number of views, we show that only six views are needed to suppress false 3D feature points in most cases. This discussion gives a criterion for designing an optimal multi-camera system for recovering the 3D structure and 3D motion of moving objects. An experimental multi-camera system was constructed to confirm the validity of our method; it can capture images from six different views at once and record a motion image sequence from each view over a period of a few seconds. It was tested successfully on recovering the 3D structure of a plaster head of Venus and the 3D structure and motion of a moving hand.
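
    A sketch of the back-projection step described above, i.e. the least-squares "intersection" of several 3D lines (the suppression of false points across six views is not reproduced):

        import numpy as np

        def nearest_point_to_lines(origins, dirs):
            # Least-squares intersection of lines p = o_i + t*d_i: solve
            # sum_i (I - d_i d_i^T) (p - o_i) = 0, which minimizes the
            # summed squared point-to-line distances.
            A, b = np.zeros((3, 3)), np.zeros(3)
            for o, d in zip(origins, dirs):
                d = d / np.linalg.norm(d)
                P = np.eye(3) - np.outer(d, d)
                A += P
                b += P @ o
            return np.linalg.solve(A, b)

        # Demo: lines from three "optical centers" through the point (1, 2, 3).
        p = np.array([1.0, 2.0, 3.0])
        centers = [np.array([0.0, 0, 0]), np.array([5.0, 0, 0]),
                   np.array([0.0, 6, 0])]
        dirs = [p - c for c in centers]
        print(nearest_point_to_lines(centers, dirs))   # -> [1. 2. 3.]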

  • Object Recognition in Image Sequences with Hopfield Neural Network

    Kouichirou NISHIMURA  Masao IZUMI  Kunio FUKUNAGA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

    Vol: E78-D No:8, Page(s): 1058-1064

    In object recognition using 3-D configuration data, the scale and pose of the object are important factors: if they are unknown, the object cannot be compared with the models in the database. We therefore propose a strategy, based on a Hopfield neural network, for recognizing objects independently of their scale and pose. We also propose a strategy for estimating the camera motion needed to reconstruct the 3-D configuration of the object. Here the camera motion is estimated only from the sequential images taken by a moving camera, so the 3-D configuration of the object is reconstructed from the image sequence alone. Multiple regression analysis is adopted to estimate the camera motion parameters and reduce their errors.
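
    A hedged sketch of using a Hopfield-style network for feature matching: binary units v[i, j] select model-to-scene assignments, with an inhibition term enforcing one-to-one matching (the paper's scale/pose-invariant encoding is not reproduced; the penalty weight and update schedule are illustrative):

        import numpy as np

        def hopfield_match(sim, iters=400, seed=0):
            # Random asynchronous updates descend the energy defined by a
            # similarity reward and a row/column mutual-inhibition penalty.
            rng = np.random.default_rng(seed)
            n, m = sim.shape
            A = 2.0 * sim.max()   # illustrative penalty weight
            v = (rng.random((n, m)) > 0.5).astype(float)
            for _ in range(iters):
                i, j = rng.integers(n), rng.integers(m)
                u = sim[i, j] - A * (v[i].sum() + v[:, j].sum() - 2 * v[i, j])
                v[i, j] = 1.0 if u > 0 else 0.0
            return v

        # Demo: settles into a one-to-one assignment (a local energy
        # minimum, not necessarily the globally best one).
        sim = np.eye(4) + 0.1 * np.random.default_rng(1).random((4, 4))
        print(hopfield_match(sim))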

  • Passive Depth Acquisition for 3D Image Displays

    Kiyohide SATOH  Yuichi OHTA  

     
    INVITED PAPER

    Vol: E77-D No:9, Page(s): 949-957

    In this paper, we first discuss a framework for a 3D image display system that combines passive sensing and active display technologies. Passive sensing makes it possible to capture real scenes under natural conditions; active display presents arbitrary views with proper motion parallax following the observer's motion. The requirements that 3D image displays place on passive sensing are discussed in comparison with those of robot vision. Then, a new stereo algorithm called SEA (Stereo by Eye Array), which satisfies these requirements, is described in detail. The SEA uses nine images captured by a 3 x 3 camera array and has the following features for depth estimation: 1) pixel-based correspondence search yields a dense, high-spatial-resolution depth map; 2) the correspondence ambiguity for linear edges oriented parallel to a particular baseline is eliminated by using multiple baselines with different orientations; 3) occlusion is easily detected, and an occlusion-free depth map with sharp object boundaries is generated. The feasibility of the SEA is demonstrated by experiments using real image data.
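
    A sketch of the multi-baseline idea behind feature 2), with horizontal baselines only (the 3 x 3 eye-array geometry and the occlusion handling of features 1) and 3) are not reproduced):

        import numpy as np

        def multi_baseline_depth(ref, others, baselines, n_levels=32):
            # Sum SSD matching costs over several baselines: at inverse-depth
            # level z, camera k sees a disparity of baselines[k]*z pixels.
            # An ambiguity that survives one baseline tends to be resolved
            # in the sum. Assumes max(baselines)*n_levels < image width.
            H, W = ref.shape
            cost = np.zeros((n_levels, H, W))
            for z in range(n_levels):
                for img, b in zip(others, baselines):
                    d = int(round(b * z))
                    diff = np.full((H, W), 1e6)   # penalty where views miss
                    diff[:, d:] = (ref[:, d:] - img[:, :W - d]) ** 2
                    cost[z] += diff
            return np.argmin(cost, axis=0)   # per-pixel inverse-depth index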

  • Image Synthesis Based on Estimation of Camera Parameters from Image Sequence

    Jong-Il PARK  Nobuyuki YAGI  Kazumasa ENAMI  

     
    PAPER

    Vol: E77-D No:9, Page(s): 973-986

    This paper describes an image synthesis method based on the estimation of camera parameters. To obtain high-quality synthesized images, we take several constraints into account, including the angle of view and the synchronization of changes of scale with changes of viewing direction. The proposed method rests on the observation that any camera operation consisting of a change of scale and a pure 3D rotation can be represented by a 2D geometric transformation. This transformation covers the whole synthesis procedure of locating, synchronizing, and operating on images, which is described in terms of a virtual camera consisting of a virtual viewing point and a virtual image plane. The method can be implemented efficiently so that each image to be synthesized undergoes the transformation only once. The parameters of the transformation are estimated from the image sequence by first establishing correspondences and then fitting the correspondence data to the transformation model. We present experimental results that show the validity of the proposed method.
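
    A sketch of the 2D transformation such a method rests on: for a camera that only rotates and zooms, the inter-image map is the homography H = K2 R K1^-1 (a pure horizontal pan and illustrative intrinsics are assumed here):

        import numpy as np

        def pan_zoom_homography(f1, f2, pan_deg, size):
            # Image-to-image map for a rotation-plus-zoom camera operation;
            # K1, K2 share the image center as principal point.
            h, w = size
            K1 = np.array([[f1, 0, w / 2], [0, f1, h / 2], [0, 0, 1.0]])
            K2 = np.array([[f2, 0, w / 2], [0, f2, h / 2], [0, 0, 1.0]])
            a = np.deg2rad(pan_deg)
            R = np.array([[np.cos(a), 0, np.sin(a)],
                          [0, 1, 0],
                          [-np.sin(a), 0, np.cos(a)]])
            return K2 @ R @ np.linalg.inv(K1)

        # Demo: where the image center maps under a 5-degree pan with zoom.
        H = pan_zoom_homography(800, 1000, 5.0, (480, 640))
        p = H @ np.array([320.0, 240.0, 1.0])
        print(p[:2] / p[2])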

  • Calibration of Linear CCD Cameras Used in the Detection of the Position of the Light Spot

    Toyohiko HAYASHI  Rika KUSUMI  Michio MIYAKAWA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

    Vol: E76-D No:8, Page(s): 912-918

    This paper presents a technique by which any linear CCD camera, even one with lens distortion or a misaligned lens and CCD, may be calibrated to obtain optimum performance. The camera-image formation model is described by a polynomial expression that provides the line-of-sight flat beam containing the target light spot. The coefficients of the expression, referred to as the camera parameters, can be estimated by the linear least-squares technique so as to minimize the discrepancy between the reference points and the model-driven flat beam. This technique requires, however, a rough estimate of the camera orientation as well as a number of reference points. Experiments employing both computer simulations and actual CCD equipment confirmed that the proposed model accurately describes the system and that the parameter estimation is robust against noise.
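
    The paper's polynomial model is not reproduced in detail; as a sketch of the linear least-squares idea, the following fits a flat-beam (plane) model whose coefficients are linear in the sensor coordinate u, via homogeneous least squares (the first-order dependence on u is an illustrative assumption):

        import numpy as np

        def fit_flat_beam_model(pts3d, u):
            # Each sensor reading u picks out a "flat beam" (a plane) of 3D
            # points imaged onto it; model the plane as linear in u:
            #   (a0 + a1*u)x + (b0 + b1*u)y + (c0 + c1*u)z + (d0 + d1*u) = 0.
            # The SVD null vector gives the 8 coefficients up to scale.
            X = np.column_stack([pts3d, np.ones(len(pts3d))])   # (N, 4)
            M = np.hstack([X, X * u[:, None]])                  # (N, 8)
            return np.linalg.svd(M)[2][-1]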
