Akira SHINTANI Akio OGIHARA Yoshikazu YAMAGUCHI Yasuhisa HAYASHI Kunio FUKUNAGA
We propose two methods to fuse auditory and visual information for accurate speech recognition. The first method fuses the two kinds of information by linear combination, after computing a probability for each word with an HMM for each modality. The second method fuses them using a histogram that expresses the correlation between them. We have performed experiments comparing the proposed methods with a conventional method and confirmed the validity of the proposed methods.
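The first fusion method can be sketched as follows. This is an illustrative reading of "linear combination" only: the weight value, the function name, and the use of log-likelihoods are assumptions, not details from the abstract.

```python
# Hypothetical sketch: blend per-word audio and visual HMM log-likelihoods
# with a weight lambda_, then pick the highest-scoring word.

def fuse_and_recognize(audio_logp, visual_logp, lambda_=0.7):
    """audio_logp, visual_logp: dicts mapping word -> HMM log-likelihood.
    lambda_ weights the audio modality (an assumed, tunable value)."""
    fused = {
        w: lambda_ * audio_logp[w] + (1.0 - lambda_) * visual_logp[w]
        for w in audio_logp
    }
    return max(fused, key=fused.get)

# Example with made-up log-likelihoods for three candidate words:
audio = {"ichi": -10.0, "ni": -12.5, "san": -11.0}
visual = {"ichi": -9.0, "ni": -8.0, "san": -13.0}
print(fuse_and_recognize(audio, visual))  # -> "ichi"
```

Setting `lambda_` to 1.0 or 0.0 recovers audio-only or visual-only recognition, which is what makes the linear combination a convenient baseline for fusion.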
Naoshi DOI Akira SHINTANI Yasuhisa HAYASHI Akio OGIHARA Shinobu TAKAMATSU
Recently, speech recognition methods that fuse visual and auditory information have been studied. This paper investigates which mouth-shape images are suitable for such fusion. Mouth-shape features extracted from gray-level images and from binary images are adopted, and speech recognition using the linear-combination method is performed. From the recognition results, we examine which mouth-shape features are effective for fusing visual and auditory information, and also confirm the effectiveness of using the two kinds of mouth-shape features together.
Yasuhisa HAYASHI Satoshi KONDO Nobuyuki TAKASU Akio OGIHARA Shojiro YONEDA
This study proposes a new training method for hidden Markov models with separate vector quantization (SVQ-HMM) in speech recognition. The proposed method exploits the correlation between two different features, cepstrum and delta-cepstrum. The correlation is used to reduce the number of reestimations for the two features, so the total computation time for training the models decreases. The proposed method is applied to Japanese isolated-digit recognition.
Yasuhisa HAYASHI Akio OGIHARA Kunio FUKUNAGA
We propose a recognition method for HMMs using a simultaneous generative histogram. The proposed method uses the correlation between two features, which is expressed by the simultaneous generative histogram: the output probabilities of the integrated HMM are conditioned on the codeword of the other feature. The proposed method is applied to isolated-digit word recognition to confirm its validity.
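The conditioning step above can be illustrated with a small sketch. Assuming the "simultaneous generative histogram" is a joint count table over the VQ codewords of the two features (the function name and the Laplace smoothing are illustrative assumptions), the conditional probability of one feature's codeword given the other's is obtained by normalizing each row:

```python
# Hypothetical sketch: build P(codeword_b | codeword_a) from jointly
# observed codeword pairs, with Laplace smoothing for unseen pairs.
from collections import Counter

def joint_histogram(pairs, n_a, n_b, smooth=1.0):
    """pairs: iterable of (codeword_a, codeword_b) observed together.
    Returns P(b | a) as a nested list, Laplace-smoothed."""
    counts = Counter(pairs)
    cond = []
    for a in range(n_a):
        row = [counts[(a, b)] + smooth for b in range(n_b)]
        total = sum(row)
        cond.append([c / total for c in row])
    return cond

pairs = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1)]
p_b_given_a = joint_histogram(pairs, n_a=2, n_b=2)
print(p_b_given_a[0])  # distribution of feature-b codewords given a = 0
```

In such a scheme, each row of the table could serve as the conditioned output probability during recognition, so correlated codewords reinforce each other instead of being scored independently.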
Yoshikazu YAMAGUCHI Akio OGIHARA Yasuhisa HAYASHI Nobuyuki TAKASU Kunio FUKUNAGA
We propose a continuous speech recognition algorithm utilizing island-driven A* search. Conventional left-to-right A* search is liable to lose the optimal solution from a finite stack when the beginning of the input speech is unclear. The proposed island-driven A* search proceeds forward and backward from the clearest part of the input speech, and thus avoids losing the optimal solution from a finite stack.
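The island-driven idea can be shown with a toy sketch of the visiting order alone (the full A* machinery is omitted): instead of scanning left to right, start from the most reliable frame and grow the hypothesis outward in both directions. The per-frame confidence scores and the function name here are illustrative assumptions.

```python
# Toy sketch: visit frames starting from the most confident one ("island"),
# then expand toward whichever neighbouring frame is more confident.

def island_order(frame_confidence):
    """Return frame indices in the order an island-driven search
    would visit them: best frame first, then growing outward."""
    n = len(frame_confidence)
    start = max(range(n), key=frame_confidence.__getitem__)
    left, right = start - 1, start + 1
    order = [start]
    while left >= 0 or right < n:
        # Prefer the more confident of the two frontier frames.
        if right >= n or (left >= 0 and
                          frame_confidence[left] >= frame_confidence[right]):
            order.append(left)
            left -= 1
        else:
            order.append(right)
            right += 1
    return order

print(island_order([0.2, 0.9, 0.5, 0.8, 0.1]))  # -> [1, 2, 3, 0, 4]
```

The point of this ordering is that an obscure utterance onset (index 0 above) is scored late, after the clear middle of the utterance has already anchored the hypothesis, so it cannot push the optimal path off a finite stack early in the search.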