
Lip Location Normalized Training for Visual Speech Recognition

Oscar VANEGAS, Keiichi TOKUDA, Tadashi KITAMURA

Summary:

This paper describes a method for normalizing lip position to improve the performance of a visual-information-based speech recognition system. Two types of information are useful in speech recognition: the speech signal itself and visual information from the lips in motion. This paper addresses problems that arise when using images of the lips in motion, such as the effect of variation in lip location. The proposed lip location normalization method is based on a search algorithm over lip positions, in which the location normalization is integrated into model training. Experiments on speaker-independent isolated word recognition were carried out on the Tulips1 and M2VTS databases. On the ten-digit word recognition task of the M2VTS database, the method achieved a recognition rate of 74.5% and an error reduction rate of 35.7%.
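The abstract gives no implementation details, but the general idea of searching over candidate lip positions with the current model and folding that normalization back into training can be sketched roughly as below. This is a minimal illustrative sketch in Python, not the authors' algorithm: the window size, search range, and the `score_fn`/`retrain_fn` interfaces are all assumptions introduced here for illustration.

```python
import numpy as np

def extract_window(frame, top, left, height=16, width=16):
    # Crop a fixed-size lip window at the candidate offset (frame is a 2-D grayscale array).
    return frame[top:top + height, left:left + width]

def best_lip_offset(frame, score_fn, search_range=4, height=16, width=16):
    # Search candidate offsets around the frame centre and return the offset
    # whose window scores highest under the current model (via score_fn).
    rows, cols = frame.shape
    centre_top = (rows - height) // 2
    centre_left = (cols - width) // 2
    best_offset, best_score = (centre_top, centre_left), -np.inf
    for dt in range(-search_range, search_range + 1):
        for dl in range(-search_range, search_range + 1):
            top, left = centre_top + dt, centre_left + dl
            if top < 0 or left < 0 or top + height > rows or left + width > cols:
                continue
            score = score_fn(extract_window(frame, top, left, height, width))
            if score > best_score:
                best_offset, best_score = (top, left), score
    return best_offset

def normalize_and_retrain(frames, score_fn, retrain_fn, n_iters=3):
    # Alternate between (1) re-locating the lip window in every frame using the
    # current model and (2) re-estimating the model on the normalized windows,
    # so that location normalization is integrated into training.
    model = None
    for _ in range(n_iters):
        windows = []
        for frame in frames:
            top, left = best_lip_offset(frame, lambda w: score_fn(w, model))
            windows.append(extract_window(frame, top, left))
        model = retrain_fn(windows)
    return model
```

In this sketch, `score_fn(window, model)` would return a likelihood of the window under the current recognition model and `retrain_fn(windows)` would re-estimate that model; both are placeholders standing in for whatever visual front end and model training the paper actually uses.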

Publication
IEICE TRANSACTIONS on Information Vol.E83-D No.11 pp.1969-1977
Publication Date
2000/11/25
Type of Manuscript
PAPER
Category
Speech and Hearing
