
Author Search Result

[Author] Yasuaki NAKANO (3 hits)

Results 1-3 of 3
  • Cursive Handwritten Word Recognition Using Multiple Segmentation Determined by Contour Analysis

    Hirobumi YAMADA  Yasuaki NAKANO  

     
    PAPER-Word Recognition

    Vol: E79-D No:5  Page(s): 464-470

    This paper proposes a method for cursive handwritten word recognition. Cursive word recognition generally consists of three stages: segmentation of the cursive word, character recognition, and word recognition. Traditional approaches detect a single candidate segmentation point between characters and cut the touching characters at that point [1]. However, it is difficult to detect a correct segmentation point between characters in a cursive word, because the shape of touching characters varies greatly from case to case. In this research, we determine multiple candidate segmentation points between characters; character recognition and word recognition then decide which candidate is the most plausible touching point. In our experiments, the recognition rate at the character recognition stage was 75.7%, while the cumulative recognition rate within the best three candidates was 93.7%. In word recognition, the recognition rate was 79.8%, while the cumulative recognition rate within the best five candidates was 91.7% for a lexicon size of 50. The processing speed is about 30 sec/word on a SPARCstation 5.
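    The idea of keeping multiple cut candidates and letting the recognizer arbitrate can be sketched as follows. This is a hedged illustration, not the authors' implementation: the word image is abstracted as a list of pixel columns, and `recognize` is a hypothetical character classifier returning a label and a confidence.

    ```python
    # Sketch of recognition-driven segmentation selection, assuming a
    # hypothetical character recognizer `recognize(glyph) -> (label, conf)`.

    def split_at(image, cuts):
        """Split a word image (a list of pixel columns) at the given x positions."""
        bounds = [0, *cuts, len(image)]
        return [image[a:b] for a, b in zip(bounds, bounds[1:])]

    def best_segmentation(word_image, candidate_cut_sets, recognize):
        """Try each candidate set of cut points and keep the segmentation
        whose characters the recognizer is most confident about."""
        best_score, best_word = float("-inf"), None
        for cuts in candidate_cut_sets:
            glyphs = split_at(word_image, cuts)
            labels, confs = zip(*(recognize(g) for g in glyphs))
            score = sum(confs) / len(confs)   # average recognizer confidence
            if score > best_score:
                best_score, best_word = score, "".join(labels)
        return best_word
    ```

    In this sketch the word-level decision is reduced to averaging character confidences; the paper additionally uses word recognition against a lexicon to rank the candidates.
    
    
    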

  • A High Speed Contour Fill Method for Character Image Generation

    Kazuki NAKASHIMA  Masashi KOGA  Katsumi MARUKAWA  Yoshihiro SHIMA  Yasuaki NAKANO  

     
    PAPER

    Vol: E77-D No:7  Page(s): 832-838

    This paper proposes a new, high-speed method of filling in the contours of alphanumeric characters to produce correct binary image patterns. We call this method the improved edge-fill method because it improves on a previously developed edge-fill method. Ambiguities of the conventional edge-fill method on binary images are eliminated by selecting fill pixels from combinations of Freeman chain codes, which express the contour lines. Consequently, the areas inside the contour lines are filled in rapidly and correctly. With the new method, the processing time for character image generation is reduced by ten to twenty percent compared with the conventional method. The effectiveness of the new method is examined in experiments using both Arabic numerals and letters of the Roman alphabet. The results show that this fill method produces correct image patterns and that it can be applied to alphanumeric-character contour filling.
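    The basic setting can be sketched as follows: a character contour given as a Freeman chain code is traced into a polygon, and the interior is filled with an even-odd scan-line rule. This is a minimal illustration of contour filling in general, not the paper's improved edge-fill method; direction numbering follows the common image convention (y grows downward).

    ```python
    # Freeman chain-code directions in image coordinates (0=E, 2=N, 4=W, 6=S).
    DIRS = {0: (1, 0), 1: (1, -1), 2: (0, -1), 3: (-1, -1),
            4: (-1, 0), 5: (-1, 1), 6: (0, 1), 7: (1, 1)}

    def fill_contour(start, chain, width, height):
        """Trace a chain-coded contour and fill its interior (even-odd rule)."""
        x, y = start
        pts = [(x, y)]
        for c in chain:                       # chain code -> polygon vertices
            dx, dy = DIRS[c]
            x, y = x + dx, y + dy
            pts.append((x, y))
        grid = [[0] * width for _ in range(height)]
        edges = list(zip(pts, pts[1:]))
        for row in range(height):
            yc = row + 0.5                    # sample between rows to avoid vertex ties
            xs = sorted(x0 + (yc - y0) / (y1 - y0) * (x1 - x0)
                        for (x0, y0), (x1, y1) in edges
                        if (y0 <= yc) != (y1 <= yc))   # edges crossing this scan line
            for a, b in zip(xs[::2], xs[1::2]):        # fill between crossing pairs
                for col in range(int(a + 0.5), int(b + 0.5) + 1):
                    if 0 <= col < width:
                        grid[row][col] = 1
        for px, py in pts:                    # ensure the contour itself is set
            if 0 <= px < width and 0 <= py < height:
                grid[py][px] = 1
        return grid
    ```

    The paper's contribution lies elsewhere: rather than scanning the whole image, it selects fill pixels directly from combinations of adjacent chain-code directions, which removes the ambiguities of the classical edge-fill approach on binary images.
    
    
    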

  • Note Symbol Extraction for Printed Piano Scores Using Neural Networks*

    Hidetoshi MIYAO  Yasuaki NAKANO  

     
    PAPER-Document Recognition and Analysis

    Vol: E79-D No:5  Page(s): 548-554

    In traditional note symbol extraction processes, extracted candidates for note elements were identified using complex if-then rules based on note formation rules, and these rules required subtle parameter adjustment through many experiments. The purpose of our system is to avoid these tedious tasks and to provide accurate, high-speed extraction of note heads, stems, and flags according to the following procedure. (1) We extract head and flag candidates based on the stem positions. (2) To identify heads and flags among the candidates, we use a pair of three-layer neural networks; to train the networks, we give the position information and reliability factors of the candidates to the input units. (3) With the learned weights, the head and flag candidates are recognized. In our experiments, we obtained a high extraction rate of more than 99% for thirteen printed piano scores on A4 sheets with various difficulties. On a workstation (SPARCstation 10), processing took about 90 seconds per score on average, which means that our system can analyze piano scores five times or more as fast as manual work. Therefore, our system can execute the task without the traditional tedious work, and can recognize the symbols quickly and accurately.
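    The candidate-identification step (2) can be sketched with a small three-layer (input / hidden / output) network trained by gradient descent. The feature layout and toy training data below are invented for illustration; the paper's actual inputs are the candidates' position information and reliability factors.

    ```python
    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    class ThreeLayerNet:
        """Input -> hidden -> single output unit, squared-error training."""

        def __init__(self, n_in, n_hidden, seed=0):
            rng = random.Random(seed)
            self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)]
                       for _ in range(n_hidden)]
            self.b1 = [0.0] * n_hidden
            self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
            self.b2 = 0.0

        def forward(self, x):
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(self.w1, self.b1)]
            y = sigmoid(sum(w * hi for w, hi in zip(self.w2, h)) + self.b2)
            return h, y

        def train(self, data, epochs=3000, lr=0.5):
            for _ in range(epochs):
                for x, t in data:
                    h, y = self.forward(x)
                    dy = (y - t) * y * (1 - y)            # output-layer delta
                    for j, hj in enumerate(h):
                        dh = dy * self.w2[j] * hj * (1 - hj)  # hidden delta
                        self.w2[j] -= lr * dy * hj
                        for i, xi in enumerate(x):
                            self.w1[j][i] -= lr * dh * xi
                        self.b1[j] -= lr * dh
                    self.b2 -= lr * dy
    ```

    Trained on labeled candidates (head vs. non-head), the output unit's activation then serves as the accept/reject decision for each candidate.
    
    
    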