
Class-Based N-Gram Language Model for New Words Using Out-of-Vocabulary to In-Vocabulary Similarity

Welly NAPTALI, Masatoshi TSUCHIYA, Seiichi NAKAGAWA


Summary:

Out-of-vocabulary (OOV) words create serious problems for automatic speech recognition (ASR) systems. Not only are they misrecognized as in-vocabulary (IV) words with similar phonetics, but the resulting errors also propagate to nearby words. The language models (LMs) of most open-vocabulary ASR systems treat all OOV words as a single entity, ignoring their linguistic information. In this paper, we present a class-based n-gram LM that handles each OOV word individually without retraining all the LM parameters. Each OOV word is assigned to a class of IV words with similar semantic meaning. The World Wide Web is used to acquire additional data for finding the relation between OOV and IV words. An evaluation based on adjusted perplexity and word error rate was carried out on the Wall Street Journal corpus. The results suggest that using multiple classes for OOV words is preferable to a single unknown class.
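The class-based decomposition the abstract describes, P(w | h) = P(c(w) | h) · P(w | c(w)), with an OOV word borrowing the class of a semantically similar IV word, can be sketched as follows. All words, classes, probabilities, and similarity scores below are invented for illustration and are not taken from the paper.

```python
# Toy sketch of a class-based n-gram LM that maps an OOV word to an
# IV class via OOV-to-IV similarity (all data here is hypothetical).

class_of = {"stock": "FINANCE", "share": "FINANCE", "rain": "WEATHER"}

# P(class | history) and P(word | class), invented for the sketch.
p_class_given_hist = {("the",): {"FINANCE": 0.6, "WEATHER": 0.4}}
p_word_given_class = {"FINANCE": {"stock": 0.7, "share": 0.3},
                      "WEATHER": {"rain": 1.0}}

# Web-derived similarity between an OOV word and IV words (invented).
similarity = {"equity": {"stock": 0.9, "rain": 0.1}}

def prob(word, hist):
    """Class-based bigram probability. An OOV word is assigned the
    class of its most similar IV word instead of a single <unk> class,
    so no LM parameters need to be retrained."""
    if word not in class_of:  # OOV case
        iv = max(similarity[word], key=similarity[word].get)
        c = class_of[iv]
        # Reuse the similar IV word's in-class probability as a proxy.
        p_w_c = p_word_given_class[c][iv]
    else:
        c = class_of[word]
        p_w_c = p_word_given_class[c][word]
    return p_class_given_hist[hist][c] * p_w_c

print(round(prob("equity", ("the",)), 3))  # OOV "equity" -> FINANCE via "stock"
```

The key property the sketch illustrates is that the OOV word inherits a full probability from an existing class, rather than sharing one undifferentiated unknown-word probability.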

Publication
IEICE TRANSACTIONS on Information, Vol. E95-D, No. 9, pp. 2308-2317
Publication Date
2012/09/01
Online ISSN
1745-1361
DOI
10.1587/transinf.E95.D.2308
Type of Manuscript
PAPER
Category
Speech and Hearing
