Author Search Result

[Author] Manabu HASHIMOTO (3 hits)

  • A Visual Inspection System Based on Trinarized Broad-Edge and Gray-Scale Hybrid Matching

    Haruhisa OKUDA  Manabu HASHIMOTO  Miwako HIROOKA  Kazuhiko SUMI  

     
    PAPER-Image Inspection

      Vol:
    E89-D No:7
      Page(s):
    2068-2075

    In industrial manufacturing, visual pattern inspection is an important task for preventing the inclusion of incorrect parts. Such methods are expected to cope with positional and rotational misalignment as well as illumination changes. In this paper, we propose a discrimination method called Trinarized broad-edge and Gray-scale Hybrid Matching (TGHM). The method achieves high reliability through gray-scale cross-correlation, which discriminates patterns well, combined with high-speed position and rotation alignment based on a trinarized broad-edge representation, which is compact and robust to illumination variation. When the method is applied to mis-collation inspection equipment on a bookbinding machine, the processing speed is confirmed to be 24,000 sheets/hour, the error detection rate 100.0%, and the false-alarm rate less than 0.002%, verifying that the method is practical.
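The gray-scale verification stage described above can be sketched as normalized cross-correlation (NCC), whose invariance to uniform brightness offsets is what makes it robust under illumination change. This is a minimal illustrative sketch, not the paper's implementation; the function name and the sample pixel values are assumptions.

```python
# Normalized cross-correlation between a template and an image patch
# (both given as flat lists of pixel intensities of equal length).
from math import sqrt

def ncc(template, patch):
    """Return the NCC score in [-1, 1]; 0.0 if either input is constant."""
    n = len(template)
    mt = sum(template) / n          # template mean
    mp = sum(patch) / n             # patch mean
    num = sum((t - mt) * (p - mp) for t, p in zip(template, patch))
    den = sqrt(sum((t - mt) ** 2 for t in template) *
               sum((p - mp) ** 2 for p in patch))
    return num / den if den else 0.0

template = [10, 20, 30, 40]
bright   = [110, 120, 130, 140]     # same pattern under a uniform offset
print(round(ncc(template, bright), 3))  # → 1.0 (offset-invariant match)
```

Because the mean is subtracted from both signals, a uniform intensity shift leaves the score unchanged, which is the property exploited for reliable gray-scale discrimination.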

  • Three-Level Broad-Edge Template Matching and Its Application to Real-Time Vision System

    Kazuhiko SUMI  Manabu HASHIMOTO  Haruhisa OKUDA  Shin'ichi KURODA  

     
    PAPER

      Vol:
    E78-D No:12
      Page(s):
    1526-1532

    This paper presents a new internal image representation in which the scene is encoded into a three-intensity-level image. This representation is generated by Laplacian-Gaussian filtering followed by dual thresholding; we refer to this image as the three-level broad-edge representation. It suppresses high-frequency noise and shading in the image and encodes the sign of each pixel's intensity relative to its surrounding region. Image model search based on cross-correlation using this representation is as reliable as search based on gray-scale normalized correlation, while reducing the computational cost by a factor of 50. We examined the reliability and real-time performance of this method when applied to an industrial object recognition task. Our prototype system achieves a 32×32 image model search over a 128×128 pixel area in 2 milliseconds with a 9 MHz pixel-clock image processor. This speed is fast enough for searching and tracking a single object at video frame rate.
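The three-level encoding described above can be sketched as a second-derivative filter followed by dual thresholding into {-1, 0, +1}. The sketch below substitutes a plain discrete Laplacian for the paper's Laplacian-Gaussian filter, and the threshold value and test image are illustrative assumptions.

```python
# Three-level broad-edge sketch: Laplacian response, then dual thresholding.
def laplacian(img):
    """4-neighbor discrete Laplacian; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x] +
                         img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
    return out

def trinarize(resp, th):
    """Map each response to +1 (> th), -1 (< -th), or 0 otherwise."""
    return [[1 if v > th else -1 if v < -th else 0 for v in row]
            for row in resp]

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 100, 100] for _ in range(4)]
tri = trinarize(laplacian(img), th=10)
print(tri[1])  # → [0, 1, -1, 0]: +1 on the dark side of the edge, -1 on the bright side
```

The resulting broad-edge band straddles the intensity step with opposite signs on each side, which is the "sign of relative intensity" the representation encodes, while flat regions map to 0.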

  • Vision System for Depalletizing Robot Using Genetic Labeling

    Manabu HASHIMOTO  Kazuhiko SUMI  Shin'ichi KURODA  

     
    PAPER

      Vol:
    E78-D No:12
      Page(s):
    1552-1558

    In this paper, we present a vision system for a depalletizing robot that recognizes carton objects. The algorithm consists of extracting object candidates and a labeling process that determines whether each candidate actually exists. Treating this labeling as a combinatorial optimization of labels, we propose a new labeling method based on a Genetic Algorithm (GA). GA is an effective optimization method, but it has been difficult to apply to real industrial systems because of its processing time and the difficulty of guaranteeing a globally optimal solution. We have addressed these problems with the following design guidelines: (1) encoding high-level information, such as the existence of object candidates, into chromosomes; (2) devising an effective coding method and genetic operations based on the building-block hypothesis; and (3) preparing a support procedure in the vision system that compensates for mis-recognition caused by pseudo-optimal labeling solutions. The building-block hypothesis states that a better solution can be generated by combining parts of good solutions; in our problem, a globally desirable image interpretation is expected to emerge by combining consistently interpreted subimages. Through experiments on real images, we have shown that the reliability of the proposed vision system exceeds 98% and the recognition speed is 5 seconds/image, which is practical enough for the real-time robot task.
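The GA labeling described above can be sketched as optimization over bitstrings in which each gene marks one object candidate as existing or not. Everything in this sketch beyond that encoding idea is an illustrative assumption: the fitness function stands in for the paper's consistency score, and the reference labeling, population size, and operator parameters are invented for the demo.

```python
# Minimal GA sketch: chromosomes are existence bits for object candidates.
import random

random.seed(0)
N = 8                                   # number of object candidates (assumed)
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]       # stand-in for a consistent labeling

def fitness(chrom):
    # Placeholder consistency score: agreement with the reference labeling.
    return sum(c == t for c, t in zip(chrom, TARGET))

def crossover(a, b):
    cut = random.randrange(1, N)        # single-point crossover keeps
    return a[:cut] + b[cut:]            # contiguous "building blocks" intact

def mutate(chrom, rate=0.05):
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                    # keep the best half unchanged
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(10)]

best = max(pop, key=fitness)
print(fitness(best))                    # typically reaches the maximum of 8
```

Single-point crossover recombines runs of consistently labeled candidates, which is the building-block behavior the paper's coding guidelines aim to exploit.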