
Author Search Result

[Author] Mutsumi WATANABE (3 hits)

  • Sonar-Based Behaviors for a Behavior-Based Mobile Robot

    In So KWEON, Yoshinori KUNO, Mutsumi WATANABE, Kazunori ONOGUCHI

     
    PAPER

      Vol: E76-D No:4    Page(s): 479-485

    We present a navigation system using ultrasonic sensors for unknown and dynamic indoor environments. To achieve robustness and flexibility in the mobile robot, we develop a behavior-based system architecture consisting of multi-layered behaviors. Basic behaviors required for the navigation of a mobile robot, such as avoiding obstacles, moving towards free space, and following targets, are redundantly developed as agents and combined in the behavior-based architecture. An extended potential field method is developed to produce the appropriate velocity and steering commands for the behaviors of the robot. We demonstrate the capabilities of our system through real-world experiments in unstructured, dynamic office environments using an indoor mobile robot.
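    As an illustration of the command-generation step, the following sketch shows how a classical potential-field controller can turn a goal point and sonar obstacle readings into velocity and steering commands. It is a minimal Python sketch of the general technique under assumed gains and ranges, not the paper's extended potential field formulation; the function and parameter names are hypothetical.

    import numpy as np

    def potential_field_command(goal, sonar_hits, attract_gain=0.5,
                                repulse_gain=1.0, influence_radius=1.5):
        # Illustrative classical potential field, not the paper's extended method.
        # goal and sonar_hits are 2-D points in the robot frame (metres).
        force = attract_gain * np.asarray(goal, dtype=float)      # pull toward the goal
        for hit in sonar_hits:                                     # push away from each sonar return
            hit = np.asarray(hit, dtype=float)
            dist = np.linalg.norm(hit)
            if 0.0 < dist < influence_radius:
                force -= repulse_gain * (1.0 / dist - 1.0 / influence_radius) * hit / dist**2
        speed = min(np.linalg.norm(force), 1.0)                    # clamp translational speed
        steering = np.arctan2(force[1], force[0])                  # heading of the resultant force
        return speed, steering

    # Example: goal 3 m straight ahead, one sonar return about 0.8 m ahead and to the left.
    print(potential_field_command(goal=(3.0, 0.0), sonar_hits=[(0.7, 0.4)]))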

  • Adaptive Decomposition of Dynamic Scene into Object-Based Distribution Components Based on Mixture Model Framework

    Mutsumi WATANABE  

     
    PAPER-Image Recognition, Computer Vision

      Vol: E88-D No:4    Page(s): 758-766

    This paper proposes a method that automatically decomposes real scene images into multiple object-oriented component regions. First, histogram patterns of a specific image feature, such as intensity or hue, are estimated from the image sequence and accumulated. Next, Gaussian distribution parameters corresponding to the object components in the scene are estimated by applying the EM algorithm to the accumulated histogram. The number of components is estimated simultaneously by minimizing the Bayesian Information Criterion (BIC). The method can be applied to a variety of computer vision problems, for example color image segmentation and the recognition of scene situation transitions. Experimental results on indoor and outdoor scenes demonstrate the effectiveness of the proposed method.
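    The EM/BIC step described above can be illustrated with off-the-shelf tools: the sketch below fits one-dimensional Gaussian mixtures with EM for several candidate component counts and keeps the model whose BIC is lowest. It is a minimal Python sketch using scikit-learn, assuming the accumulated feature values (intensity or hue samples) are available as a flat array; it is not the paper's exact pipeline, and the function name is hypothetical.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def decompose_by_bic(feature_values, max_components=6):
        # Fit 1-D Gaussian mixtures for k = 1..max_components with EM and
        # keep the model with the lowest BIC (illustrative, not the paper's code).
        X = np.asarray(feature_values, dtype=float).reshape(-1, 1)
        best_model, best_bic = None, np.inf
        for k in range(1, max_components + 1):
            gmm = GaussianMixture(n_components=k, covariance_type='full',
                                  random_state=0).fit(X)
            bic = gmm.bic(X)
            if bic < best_bic:
                best_model, best_bic = gmm, bic
        return best_model  # component means/weights act as object-based distributions

    # Example: two synthetic intensity populations.
    samples = np.concatenate([np.random.normal(60, 8, 5000),
                              np.random.normal(170, 12, 5000)])
    model = decompose_by_bic(samples)
    print(model.n_components, model.means_.ravel())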

  • Planar Projection Stereopsis Method for Road Extraction

    Kazunori ONOGUCHI, Nobuyuki TAKEDA, Mutsumi WATANABE

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol: E81-D No:9    Page(s): 1006-1018

    This paper presents a method that effectively acquires free space on a plane for safe forward motion by using the height information of objects. The method can be applied to free-space extraction on a road; in short, it is a road extraction method for an autonomous vehicle. Since a road area can be assumed to be a sequence of flat planes in front of the vehicle, it is effective to apply the inverse perspective projection model to the ground plane. However, conventional methods using this model have a drawback: some areas on the road plane are wrongly detected as obstacle areas because these methods are sensitive to errors in the camera geometry with respect to the assumed plane. To overcome this drawback, the proposed approach, named the Planar Projection Stereopsis (PPS) method, augments the road extraction method based on the inverse perspective projection model with a mechanism that effectively removes these erroneous areas. Using the inverse perspective projection model, PPS projects both the left and right images onto the road plane and detects obstacle areas by examining the difference between the projected images. Because the detected obstacle areas include many erroneous regions, PPS examines their shapes and eliminates falsely detected areas on the road plane by using the following properties: obstacles whose heights differ from the road plane are projected as shapes falling backward from the location where the obstacles touch the road plane, and the length of these shapes depends on the position of the obstacles relative to the stereoscopic cameras and on their height above the road plane. Experimental results on real road scenes have shown the effectiveness of the proposed method. Quantitative evaluation shows that, on average, 89.3% of the real road area is extracted and the false-extraction ratio is 1.4%. Since the road area can be extracted by simple projection of the stereo images and subtraction of the projected images, the method can be applied in real time.
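    The projection-and-difference step can be sketched as follows, assuming precomputed homographies that map each camera image onto the road plane (obtained from the stereo calibration). This is a minimal Python/OpenCV sketch of that single step, not the full PPS method; in particular, the shape analysis that removes the "falling backward" false detections is omitted, and the function name and parameters are hypothetical.

    import cv2
    import numpy as np

    def planar_projection_obstacles(left_gray, right_gray, H_left, H_right,
                                    plane_size=(600, 400), diff_thresh=30):
        # Warp both grayscale images onto the assumed road plane; pixels that
        # lie on the plane agree after projection, while obstacles do not.
        # plane_size is the (width, height) of the bird's-eye view.
        left_plane = cv2.warpPerspective(left_gray, H_left, plane_size)
        right_plane = cv2.warpPerspective(right_gray, H_right, plane_size)
        diff = cv2.absdiff(left_plane, right_plane)
        obstacle_mask = np.uint8(diff > diff_thresh) * 255   # candidate obstacle pixels
        road_mask = cv2.bitwise_not(obstacle_mask)           # remaining free road surface
        return obstacle_mask, road_mask

    In the paper, the candidate obstacle regions would then be filtered further by examining whether their shapes fall backward from the points where they touch the road plane.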