
Author Search Result

[Author] Shinobu TAKAMATSU (6 hits)

1-6 of 6 hits
  • A Study on Mouth Shape Features Suitable for HMM Speech Recognition Using Fusion of Visual and Auditory Information

    Naoshi DOI  Akira SHINTANI  Yasuhisa HAYASHI  Akio OGIHARA  Shinobu TAKAMATSU  

     
    LETTER
    Vol: E78-A No:11
    Page(s): 1548-1552

    Recently, several speech recognition methods that fuse visual and auditory information have been studied. This paper examines which mouth-shape image features are suitable for such fusion. Features extracted from both gray-level images and binary images of the mouth are adopted, and speech recognition using the linear combination method is performed. From the recognition results, the mouth-shape features that are effective for fusing visual and auditory information are identified, and the effectiveness of using the two kinds of mouth-shape features together is also confirmed.
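    As a rough illustration of the kind of mouth-shape features mentioned above, the following Python sketch extracts a few simple features from a gray-level mouth image and its binarized version. The specific features (binary area, width, height, mean gray level, gray-level spread), the threshold value, and the function name are illustrative assumptions, not the features or code used in the paper.

    ```python
    import numpy as np

    def mouth_shape_features(gray_img, threshold=128):
        """Illustrative mouth-shape features from a cropped gray-level mouth image."""
        # Binary-image features: threshold the gray image and measure the dark region.
        binary = gray_img < threshold
        area = binary.sum()
        height = np.any(binary, axis=1).sum()   # rows containing mouth pixels
        width = np.any(binary, axis=0).sum()    # columns containing mouth pixels
        # Gray-level-image features: simple intensity statistics of the region.
        mean_level = gray_img.mean()
        level_spread = gray_img.std()
        return np.array([area, width, height, mean_level, level_spread], dtype=float)

    # Toy usage: a random stand-in for one cropped mouth-region frame.
    frame = (np.random.rand(32, 48) * 255).astype(np.uint8)
    print(mouth_shape_features(frame))
    ```

    A feature vector of this kind would be computed per video frame and fed to the visual HMM, in parallel with acoustic features fed to the auditory HMM.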

  • A Syntactic Analysis of English Sentences and the Information-Extraction

    Fujio NISHIDA  Shinobu TAKAMATSU  Yoneharu FUJITA  

     
    PAPER-Automata and Languages
    Vol: E60-E No:6
    Page(s): 290-297

    This paper presents a method for extracting requested information from English sentences that express certain facts or relations. The restricted English input sentences are deterministically reduced using the context-free production rules presented here, which are constructed by tentatively introducing nonterminal symbols with subscript variables, and the corresponding logical expressions are built up during the reduction. Extraction of the requested information is then performed efficiently using transformation rules between atomic and functional expressions.
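    The minimal Python sketch below conveys only the end result described above: mapping sentences of a restricted English subset to atomic logical expressions. It uses a couple of flat pattern rules as a crude stand-in for the paper's deterministic reduction with context-free production rules; the rule set, predicate names, and example sentences are all invented for illustration.

    ```python
    import re

    # Toy rules mapping a very restricted subject-verb-object English pattern
    # to an atomic logical expression.  These rules and predicate names are
    # illustrative only; the paper's grammar and rule set are far richer.
    RULES = [
        (re.compile(r"^(\w+) is connected to (\w+)\.$"), "connected({0},{1})"),
        (re.compile(r"^(\w+) controls (\w+)\.$"),        "controls({0},{1})"),
    ]

    def to_logical_form(sentence):
        """Reduce a restricted English sentence to an atomic expression, if a rule matches."""
        for pattern, template in RULES:
            m = pattern.match(sentence)
            if m:
                return template.format(*m.groups())
        return None  # sentence falls outside the restricted grammar

    print(to_logical_form("valve1 controls pump2."))   # -> controls(valve1,pump2)
    ```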

  • Transformation between Informal Expressions and Formal Expressions in Program Specifications

    Shinobu TAKAMATSU  Fujio NISHIDA  

     
    PAPER-Software Systems
    Vol: E73-E No:5
    Page(s): 729-737

    In recent years, various formal specification languages have been proposed for validity checking of specifications and for automatic program generation. However, they are rigorously constructed so that machines can process them, which makes them difficult for people to use. On the other hand, informal languages based on conventional notations have been widely used in practice for program specifications, but they are inconvenient for automatic processing by machines. This paper presents a method of transforming between informal and formal specifications. A kind of limited English is introduced as an informal description language for specifications. Specifications written in this limited English are parsed with a case grammar and transformed into formal specifications. The formal specifications can then be used for various kinds of automatic processing, such as refinement and program generation, and the refined formal specifications can be transformed back into limited-English expressions, usable as comments, by reversing the above transformation.
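    As a toy illustration of a two-way mapping between limited English and a formal specification term, the sketch below parses one imperative sentence pattern into a small case frame and renders it back. The case roles (ACTION, OBJECT, MANNER), the sentence template, and the formal syntax are assumptions made for this example; the paper's case grammar and specification language are much richer.

    ```python
    def english_to_formal(sentence):
        """Very rough case-grammar-style parse: imperative verb, object, optional manner."""
        words = sentence.rstrip(".").split()
        frame = {"ACTION": words[0].lower(), "OBJECT": words[2], "MANNER": None}
        if "order" in words:
            frame["MANNER"] = words[words.index("order") - 1]  # e.g. "ascending"
        args = [frame["OBJECT"]] + ([frame["MANNER"]] if frame["MANNER"] else [])
        return f'{frame["ACTION"]}({", ".join(args)})'

    def formal_to_english(term):
        """Reverse mapping: render the formal term back as a limited-English comment."""
        action, rest = term.split("(", 1)
        args = [a.strip() for a in rest.rstrip(")").split(",")]
        manner = f" in {args[1]} order" if len(args) > 1 else ""
        return f"{action.capitalize()} the {args[0]}{manner}."

    spec = english_to_formal("Sort the table in ascending order.")
    print(spec)                     # sort(table, ascending)
    print(formal_to_english(spec))  # Sort the table in ascending order.
    ```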

  • An Analysis on Minimum Searching Principle of Chaotic Neural Network

    Masaya OHTA  Kazumichi MATSUMIYA  Akio OGIHARA  Shinobu TAKAMATSU  Kunio FUKUNAGA  

     
    PAPER
    Vol: E79-A No:3
    Page(s): 363-369

    This article analyzes the dynamics of the chaotic neural network and the principle by which it searches for minima. First, it is shown that the dynamics of the chaotic neural network resemble gradient descent, so the network can roughly locate a local minimum point of a quadratic function using its attractor. Second, it is guaranteed that the vertex corresponding to a local minimum point derived from the chaotic neural network has a lower value of the objective function. It is then confirmed that the chaotic neural network can escape an invalid local minimum and find a reasonable one.
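    For concreteness, here is a heavily simplified Python sketch of the idea that a chaotic neural network descends a quadratic objective while chaotic self-feedback lets it hop out of poor local minima. The update rule follows the commonly cited transiently chaotic neuron model with decaying self-feedback; all parameter values and the toy objective are illustrative assumptions, and the paper's exact network equations may differ.

    ```python
    import numpy as np

    def chaotic_network_minimize(W, b, steps=2000, seed=0):
        """Search for a low-energy vertex of E(x) = 0.5*x^T W x + b^T x over x in {0,1}^n."""
        rng = np.random.default_rng(seed)
        n = len(b)
        k, alpha, I0, eps = 0.9, 0.015, 0.65, 0.02   # illustrative constants
        z, beta = 0.08, 0.002                        # chaotic self-feedback and its decay
        y = rng.uniform(-0.1, 0.1, n)                # internal states
        x = 1.0 / (1.0 + np.exp(-y / eps))           # firing rates in (0, 1)

        for _ in range(steps):
            grad = W @ x + b                         # gradient of the quadratic energy
            y = k * y - alpha * grad - z * (x - I0)  # descent-like term plus chaotic refractoriness
            x = 1.0 / (1.0 + np.exp(-y / eps))
            z *= (1.0 - beta)                        # anneal the self-feedback toward zero

        x_bin = (x > 0.5).astype(float)              # snap to the nearest vertex
        energy = 0.5 * x_bin @ W @ x_bin + b @ x_bin
        return x_bin, energy

    # Toy symmetric quadratic objective over {0,1}^4.
    W = np.array([[0., 2., -1., 0.],
                  [2., 0., 0., -1.],
                  [-1., 0., 0., 2.],
                  [0., -1., 2., 0.]])
    b = np.array([-1., -1., -1., -1.])
    print(chaotic_network_minimize(W, b))
    ```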

  • An Isolated Word Speech Recognition Using Fusion of Auditory and Visual Information

    Akira SHINTANI  Akiko OGIHARA  Naoshi DOI  Shinobu TAKAMATSU  

     
    PAPER
    Vol: E79-A No:6
    Page(s): 777-783

    We propose a speech recognition method that fuses auditory and visual information to achieve accurate recognition. Because both auditory and visual information are used, recognition is more accurate than when either kind of information is used alone. After each kind of information is processed by an HMM, the two are fused by a linear combination with a weight coefficient. We performed experiments and confirmed the validity of the proposed method.
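    A minimal sketch of the word-level fusion described above: each candidate word gets a score from an auditory HMM and a visual HMM, the two scores are combined linearly with a weight coefficient, and the best-scoring word is chosen. The weight value, the word list, and the scores below are placeholders, not values from the paper.

    ```python
    def fuse_and_recognize(audio_scores, visual_scores, weight=0.7):
        """Fuse per-word HMM scores by linear combination and pick the best word.

        audio_scores, visual_scores: dicts mapping each candidate word to the
        score (e.g. log-likelihood) from its auditory or visual HMM.
        weight: weight on the auditory score; 0.7 is a placeholder value.
        """
        fused = {w: weight * audio_scores[w] + (1.0 - weight) * visual_scores[w]
                 for w in audio_scores}
        return max(fused, key=fused.get), fused

    # Toy usage with made-up log-likelihoods for three candidate words.
    audio = {"ichi": -12.3, "ni": -15.1, "san": -14.0}
    visual = {"ichi": -9.8, "ni": -7.5, "san": -10.2}
    best, fused = fuse_and_recognize(audio, visual)
    print(best, fused)
    ```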

  • Speech Recognition Based on Fusion of Visual and Auditory Information Using Full-Frame Color Image

    Satoru IGAWA  Akio OGIHARA  Akira SHINTANI  Shinobu TAKAMATSU  

     
    LETTER
    Vol: E79-A No:11
    Page(s): 1836-1840

    We propose a method that fuses auditory and visual information for accurate speech recognition. The method calculates two kinds of probabilities with HMMs for each word and then fuses them by a linear combination. In addition, we use a full-frame color image as the visual information in order to improve the accuracy of the proposed speech recognition system. We performed experiments comparing the proposed method with methods using either auditory or visual information alone, and confirmed the validity of the proposed method.