Author Search Result

[Author] Masahiko YACHIDA (6 hits)

  • Real-Time Estimation of Fast Egomotion with Feature Classification Using Compound Omnidirectional Vision Sensor

    Trung Thanh NGO  Yuichiro KOJIMA  Hajime NAGAHARA  Ryusuke SAGAWA  Yasuhiro MUKAIGAWA  Masahiko YACHIDA  Yasushi YAGI  

    PAPER-Image Recognition, Computer Vision

      Vol: E93-D No:1
      Page(s): 152-166

    For fast egomotion of a camera, computing feature correspondence and motion parameters by global search becomes highly time-consuming, so the complexity of the estimation must be reduced for real-time applications. In this paper, we propose a compound omnidirectional vision sensor and an algorithm for estimating its fast egomotion. The proposed sensor has both multiple baselines and a large field of view (FOV). Our method uses the multi-baseline stereo capability to classify feature points as near or far. After this classification, the camera rotation and translation can be estimated separately using random sample consensus (RANSAC), which reduces the computational complexity. The large FOV also improves robustness, since translation and rotation are clearly distinguished. To date, there has been no work combining multi-baseline stereo with a large FOV for egomotion estimation, even though each characteristic is individually important for improving it. Experiments showed that the proposed method is robust and achieves reasonable accuracy in real time for fast motion of the sensor.
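
    The decoupling idea lends itself to a short sketch: far features are nearly insensitive to translation, so they constrain the rotation alone, and near features can then recover translation. Below is a minimal illustration, not the authors' implementation; the 2-point sampling, thresholds, and function names are hypothetical, and unit bearing vectors from the sensor are assumed:

    ```python
    import numpy as np

    def fit_rotation(prev, curr):
        """Least-squares rotation R with curr ~ R @ prev (Kabsch/SVD).
        Far features move almost purely under rotation, so only they
        are used at this stage."""
        H = prev.T @ curr
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    def ransac_rotation(prev, curr, iters=200, thresh_deg=1.0, seed=0):
        """Hypothesize rotations from minimal 2-point samples of far
        features; keep the hypothesis with the most angular inliers."""
        rng = np.random.default_rng(seed)
        cos_t = np.cos(np.radians(thresh_deg))
        best_R, best_n = np.eye(3), -1
        for _ in range(iters):
            idx = rng.choice(len(prev), size=2, replace=False)
            R = fit_rotation(prev[idx], curr[idx])
            n = np.sum((prev @ R.T * curr).sum(axis=1) > cos_t)
            if n > best_n:
                best_R, best_n = R, n
        return best_R
    ```

    With the rotation fixed, a second, lower-dimensional RANSAC over the near features can recover the translation, which is what keeps the overall search fast enough for real time.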

  • Guidance of a Mobile Robot with Environmental Map Using Omnidirectional Image Sensor COPIS

    Yasushi YAGI  Yoshimitsu NISHIZAWA  Masahiko YACHIDA  

    PAPER

      Vol: E76-D No:4
      Page(s): 486-493

    We have proposed a new omnidirectional image sensor, COPIS (COnic Projection Image Sensor), for guiding the navigation of a mobile robot. Its key feature is passive, real-time sensing (at the frame rate of a TV camera) of an omnidirectional image of the environment using a conic mirror, which makes COPIS well suited to visual navigation in real-world environments with moving objects. This paper describes a method for estimating the location and motion of the robot by detecting the azimuth of each object in the omnidirectional image and matching it against a given environmental map. The robot can always estimate its location and motion precisely, because COPIS observes a 360-degree view around the robot, even when not all edges are extracted correctly from the omnidirectional image. We also present a method for avoiding collisions with unknown obstacles and estimating their locations by detecting changes in their azimuths while the robot moves through the environment. Using the COPIS system, we performed several experiments in the real world.
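
    As a rough illustration of the azimuth-matching step, the sketch below solves for the robot pose from azimuths to landmarks already associated with the map; the data-association step and the edge extraction from the omnidirectional image are omitted, and all function names are hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def azimuth_residuals(pose, landmarks, measured):
        """Angular difference between predicted and measured azimuths.
        pose = (x, y, heading); landmarks is an (N, 2) array of map points."""
        x, y, heading = pose
        predicted = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x) - heading
        err = measured - predicted
        return np.arctan2(np.sin(err), np.cos(err))  # wrap into [-pi, pi]

    def locate_robot(landmarks, measured, initial_pose=(0.0, 0.0, 0.0)):
        """Least-squares pose from azimuths matched to the environmental map;
        with a full 360-degree view, three or more azimuths suffice."""
        return least_squares(azimuth_residuals, np.asarray(initial_pose),
                             args=(landmarks, measured)).x
    ```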

  • FOREWORD

    Masahiko YACHIDA  

    FOREWORD

      Vol: E76-D No:4
      Page(s): 409-410
  • Calibration Method for Misaligned Catadioptric Camera

    Tomohiro MASHITA  Yoshio IWAI  Masahiko YACHIDA  

    PAPER-Camera Calibration

      Vol: E89-D No:7
      Page(s): 1984-1993

    This paper proposes a calibration method for catadioptric camera systems consisting of a perspective camera and a mirror whose reflecting surface is a surface of revolution, as typified by HyperOmni Vision. The proposed method is based on conventional camera calibration and mirror posture estimation. Many camera calibration methods have been proposed, and during the last decade methods for catadioptric camera calibration have also appeared. The main problem with existing catadioptric calibration is that the degrees of freedom of the mirror posture are limited, or the accuracy of the estimated parameters is inadequate because of nonlinear optimization. Our method, in contrast, estimates all five degrees of freedom of the mirror posture and is free from the instability of nonlinear optimization. The mirror posture has only five degrees of freedom because the mirror surface is a surface of revolution: rotation about its own axis leaves the surface unchanged. Our method uses the mirror boundary and yields up to four candidate mirror postures; we apply an extrinsic parameter calibration method based on conic fitting for this estimation. Because the estimate of the mirror posture is not unique, we also propose a selection method for finding the best candidate. By using the conic-based analytical method, we avoid the initial-value problem that arises in nonlinear optimization. We conducted experiments on synthesized and real images to evaluate the performance of our method, and we discuss its accuracy.
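
    The boundary-based step can be illustrated with a standard algebraic conic fit. This shows only the fitting stage, assuming the mirror boundary has already been extracted as 2-D image points; the posture recovery and candidate selection described above are not shown:

    ```python
    import numpy as np

    def fit_conic(points):
        """Algebraic least-squares fit of a conic
        A x^2 + B xy + C y^2 + D x + E y + F = 0 to 2-D points:
        the right singular vector of the design matrix with the
        smallest singular value, i.e. the coefficients up to scale."""
        x, y = points[:, 0], points[:, 1]
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        return np.linalg.svd(D)[2][-1]

    # Example: recover an ellipse from noisy boundary samples.
    t = np.linspace(0.0, 2.0 * np.pi, 200)
    boundary = np.column_stack([3.0 * np.cos(t) + 0.5, 2.0 * np.sin(t) - 1.0])
    noisy = boundary + 0.01 * np.random.default_rng(0).normal(size=boundary.shape)
    coeffs = fit_conic(noisy)  # (A, B, C, D, E, F) of the mirror-rim image
    ```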

  • Video Synthesis with High Spatio-Temporal Resolution Using Motion Compensation and Spectral Fusion

    Kiyotaka WATANABE  Yoshio IWAI  Hajime NAGAHARA  Masahiko YACHIDA  Toshiya SUZUKI  

    PAPER-Video Generation

      Vol: E89-D No:7
      Page(s): 2186-2196

    We propose a novel strategy for obtaining video with high spatio-temporal resolution. To this end, we introduce a dual-sensor camera that simultaneously captures two video sequences with the same field of view: one at high resolution with a low frame rate and the other at low resolution with a high frame rate. This paper presents an algorithm that synthesizes a high spatio-temporal resolution video from these two sequences using motion compensation and spectral fusion. We confirm that the proposed method improves the resolution and frame rate of the synthesized video.
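
    The fusion step admits a compact sketch: take low frequencies from the upsampled high-frame-rate frame and high frequencies from the motion-compensated high-resolution frame. The hard radial cutoff below is a simplification, not the paper's actual fusion rule, and the two frames are assumed to be registered, grayscale, and of equal size:

    ```python
    import numpy as np

    def spectral_fusion(low_res_up, high_res_mc, cutoff=0.25):
        """Merge two registered frames in the frequency domain: low band
        from the upsampled high-frame-rate frame, high band from the
        motion-compensated high-resolution frame."""
        F_lo = np.fft.fftshift(np.fft.fft2(low_res_up))
        F_hi = np.fft.fftshift(np.fft.fft2(high_res_mc))
        h, w = low_res_up.shape
        yy, xx = np.mgrid[:h, :w]
        radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)  # normalized
        fused = np.where(radius <= cutoff, F_lo, F_hi)
        return np.real(np.fft.ifft2(np.fft.ifftshift(fused)))
    ```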

  • Integrated Person Identification and Expression Recognition from Facial Images

    Dadet PRAMADIHANTO  Yoshio IWAI  Masahiko YACHIDA  

    PAPER

      Vol: E84-D No:7
      Page(s): 856-866

    In this paper, we propose an integration of face identification and facial expression recognition. A face is modeled as a graph whose nodes represent facial feature points. This model is used for automatic detection of the face and its feature points, and the feature points are then tracked by flexible feature matching. Face identification is performed by comparing the graph representing the input face image with individual face models. Facial expression is modeled by finding the relationship between the motion of facial feature points and expression change. Individual and average expression models are generated and then used to classify facial expressions into appropriate categories and to estimate the degree of expression change. The expression model used for facial expression recognition is chosen according to the result of face identification.
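
    A toy version of the graph comparison might weigh a node term (feature similarity) against an edge term (geometric deformation), assuming node correspondences have already been established by the flexible feature matching; everything here, including the weighting, is hypothetical:

    ```python
    import numpy as np

    def graph_distance(feats_a, pts_a, feats_b, pts_b, alpha=0.5):
        """Distance between two face graphs with corresponding nodes:
        node term = mean distance between local feature descriptors,
        edge term = mean change in pairwise feature-point distances
        (a crude stand-in for an elastic deformation cost)."""
        node_cost = np.mean(np.linalg.norm(feats_a - feats_b, axis=1))
        d_a = np.linalg.norm(pts_a[:, None] - pts_a[None, :], axis=-1)
        d_b = np.linalg.norm(pts_b[:, None] - pts_b[None, :], axis=-1)
        edge_cost = np.mean(np.abs(d_a - d_b))
        return alpha * node_cost + (1.0 - alpha) * edge_cost

    def identify(probe, gallery):
        """Index of the gallery face model closest to the probe graph."""
        costs = [graph_distance(probe[0], probe[1], m[0], m[1]) for m in gallery]
        return int(np.argmin(costs))
    ```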