
Author Search Result

[Author] Tomoko MATSUI (7 hits)

Hits 1-7
  • Automatic Generation of Non-uniform HMM Topologies Based on the MDL Criterion

    Takatoshi JITSUHIRO  Tomoko MATSUI  Satoshi NAKAMURA  

     
    PAPER-Speech and Hearing
    Vol: E87-D No:8  Page(s): 2121-2129

    We propose a new method that introduces the Minimum Description Length (MDL) criterion into the automatic generation of non-uniform, context-dependent HMM topologies. Phonetic decision tree clustering based on the Maximum Likelihood (ML) criterion is widely used, but it creates only contextual variations. Moreover, the ML criterion requires control parameters, such as the total number of states, to be predetermined empirically for use as stop criteria. Information criteria have been applied to solve this problem for decision tree clustering; however, decision tree clustering cannot automatically create topologies with various state lengths. We therefore propose a method that applies the MDL criterion as the split and stop criteria in the Successive State Splitting (SSS) algorithm, so as to generate both contextual and temporal variations. The proposed method, the MDL-SSS algorithm, can automatically create adequate topologies without such predetermined parameters. Experimental results on travel arrangement dialogs and lecture speech show that MDL-SSS can automatically stop splitting and obtains more appropriate HMM topologies than the original SSS algorithm.
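The split/stop decision in an MDL-based approach can be illustrated with a toy description-length comparison; the simplified two-part code length below, and all the numbers, are illustrative and are not the paper's exact formulation:

```python
import math

def description_length(log_likelihood, n_params, n_frames):
    """Two-part MDL code length: data cost plus model cost.

    DL = -log L + (K / 2) * log N, where K is the number of free
    parameters and N the number of training frames (simplified form).
    """
    return -log_likelihood + 0.5 * n_params * math.log(n_frames)

def should_split(parent_ll, children_ll, parent_params, children_params, n_frames):
    """Accept a state split only if it shortens the total description length."""
    return (description_length(children_ll, children_params, n_frames)
            < description_length(parent_ll, parent_params, n_frames))

# Splitting doubles the parameter count; the split is accepted only when
# the likelihood gain outweighs the extra model cost.
print(should_split(parent_ll=-52000.0, children_ll=-51500.0,
                   parent_params=78, children_params=156, n_frames=10000))  # True
```

With a smaller likelihood gain (e.g. `children_ll=-51900.0`) the same call returns `False`, which is what lets the algorithm stop splitting without a predetermined state budget.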

  • Robust Model for Speaker Verification against Session-Dependent Utterance Variation

    Tomoko MATSUI  Kiyoaki AIKAWA  

     
    PAPER-Speech and Hearing
    Vol: E86-D No:4  Page(s): 712-718

    This paper investigates a new method for creating robust speaker models that cope with inter-session variation in a continuous-HMM-based speaker verification system. The method estimates session-independent parameters by decomposing inter-session variation into two distinct parts: session-dependent and session-independent. The parameters of the speaker models are estimated with the speaker adaptive training algorithm in conjunction with equalization of the session-dependent variation. The resulting models capture session-independent speaker characteristics more reliably than conventional models, and their discriminative power improves accordingly. Moreover, we make the models more invariant to handset variation in a public switched telephone network (PSTN) by treating session-dependent variation and handset-dependent distortion separately. Text-independent speech data recorded by 20 speakers in seven sessions over 16 months were used to evaluate the new approach. The proposed method reduces the error rate by 15% relative; compared with the popular cepstral mean normalization, the error rate is reduced by 24% relative when the speaker models are recreated using speech data recorded in four or more sessions.
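For reference, the cepstral mean normalization baseline mentioned above can be sketched in a few lines; the array shapes and the toy channel offset are assumptions for illustration, not the paper's data:

```python
import numpy as np

def cepstral_mean_normalization(cepstra):
    """Subtract the time-averaged cepstrum from every frame
    (cepstra: frames x coefficients array), removing per-session bias."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# A constant channel offset (convolutional distortion is additive in the
# cepstral domain) is removed, up to floating-point error.
rng = np.random.default_rng(0)
clean = rng.normal(size=(200, 13))           # toy cepstral frames
observed = clean + rng.normal(size=(1, 13))  # same offset on every frame
normalized = cepstral_mean_normalization(observed)
print(np.allclose(normalized.mean(axis=0), 0.0))  # True
```

Because CMN removes only a per-session constant, it cannot separate handset distortion from other session effects, which is the gap the paper's decomposition targets.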

  • A Study on Acoustic Modeling of Pauses for Recognizing Noisy Conversational Speech

    Jin-Song ZHANG  Konstantin MARKOV  Tomoko MATSUI  Satoshi NAKAMURA  

     
    PAPER-Robust Speech Recognition and Enhancement
    Vol: E86-D No:3  Page(s): 489-496

    This paper presents a study on modeling inter-word pauses to improve the robustness of acoustic models for recognizing noisy conversational speech. When precise contextual modeling is used for pauses, their frequent occurrence and varying acoustics in noisy conversational speech make it difficult to automatically generate an accurate phonetic transcription of the training data for developing robust acoustic models. We propose exploiting reliable phonetic heuristics of pauses in speech to aid the detection of their variations. On this basis, a stepwise approach to optimizing pause HMMs was applied to the data of the DARPA SPINE2 project, and a more accurate phonetic transcription was achieved. Cross-word triphone HMMs developed with this method achieved an absolute 9.2% reduction in word error rate compared to the conventional method with only context-free modeling of pauses. For the same pause modeling method, the use of the optimized phonetic segmentation brought an absolute 5.2% improvement.

  • Noise and Channel Distortion Robust ASR System for DARPA SPINE2 Task

    Konstantin MARKOV  Tomoko MATSUI  Rainer GRUHN  Jinsong ZHANG  Satoshi NAKAMURA  

     
    PAPER-Robust Speech Recognition and Enhancement
    Vol: E86-D No:3  Page(s): 497-504

    This paper presents the ATR speech recognition system designed for the DARPA SPINE2 evaluation task. The system is capable of dealing with speech from highly variable, real-world noisy conditions and communication channels. A number of robust techniques are implemented, such as differential spectrum mel-scale cepstrum features, on-line MLLR adaptation, and word-level hypothesis combination, which led to a significant reduction in the word error rate.
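As an illustration of one ingredient of such noise-robust front-ends, a standard delta-cepstrum (time-derivative) computation is sketched below; note this is the common regression formula, not necessarily the differential-spectrum feature used in the ATR system:

```python
import numpy as np

def delta_features(frames, window=2):
    """Delta (time-derivative) coefficients by linear regression over
    +/- `window` neighbouring frames; `frames` is a T x D array."""
    T = len(frames)
    denom = 2 * sum(k * k for k in range(1, window + 1))
    # Replicate edge frames so the regression window never runs off the ends.
    padded = np.concatenate([frames[:1].repeat(window, axis=0),
                             frames,
                             frames[-1:].repeat(window, axis=0)])
    deltas = np.zeros_like(frames, dtype=float)
    for t in range(T):
        for k in range(1, window + 1):
            deltas[t] += k * (padded[t + window + k] - padded[t + window - k])
    return deltas / denom

ramp = np.arange(10.0).reshape(-1, 1)  # a unit-slope "cepstral trajectory"
print(delta_features(ramp)[5, 0])      # interior slope of the ramp → 1.0
```

Appending such time derivatives to static features is a common way to add robustness to stationary channel effects, since a constant offset vanishes under differencing.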

  • Comparative Study of Speaker Identification Methods: dPLRM, SVM and GMM

    Tomoko MATSUI  Kunio TANABE  

     
    PAPER-Speaker Recognition
    Vol: E89-D No:3  Page(s): 1066-1073

    The performance of three text-independent speaker identification methods, based on the dual Penalized Logistic Regression Machine (dPLRM), the Support Vector Machine (SVM), and the Gaussian Mixture Model (GMM), is compared in experiments with 10 male speakers. The methods are compared on speech data collected over a period of 13 months in six utterance sessions, of which the first three provided training data of 12-second utterances. Comparisons are made between Mel-frequency cepstrum (MFC) data and log-power spectrum data, and between training data from a single session and from several sessions. We show that dPLRM with log-power spectrum data is competitive with the SVM and GMM methods with MFC data when trained on the combined data from the first three sessions. dPLRM outperforms the GMM method especially as the amount of training data becomes smaller. Some of these findings have already been reported in [1]-[3].
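A minimal sketch of the GMM method: score frames against each enrolled speaker's diagonal-covariance GMM and pick the highest average log-likelihood. The hand-set single-component models below stand in for EM-trained mixtures, and all numbers are toy values:

```python
import numpy as np

def gmm_log_likelihood(frames, weights, means, variances):
    """Per-frame log-likelihood under a diagonal-covariance GMM,
    computed with log-sum-exp over the mixture components."""
    # frames: (T, D); weights: (M,); means, variances: (M, D)
    diff = frames[:, None, :] - means[None, :, :]  # (T, M, D)
    log_comp = (-0.5 * np.sum(diff**2 / variances
                              + np.log(2 * np.pi * variances), axis=-1)
                + np.log(weights)[None, :])        # (T, M)
    m = log_comp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(log_comp - m).sum(axis=1, keepdims=True))).ravel()

def identify(frames, speaker_models):
    """Return the speaker whose GMM gives the highest average log-likelihood."""
    scores = {name: gmm_log_likelihood(frames, *params).mean()
              for name, params in speaker_models.items()}
    return max(scores, key=scores.get)

# Two toy single-component "GMMs" in 2-D; test frames lie near speaker A's mean.
models = {
    "A": (np.array([1.0]), np.array([[0.0, 0.0]]), np.array([[1.0, 1.0]])),
    "B": (np.array([1.0]), np.array([[5.0, 5.0]]), np.array([[1.0, 1.0]])),
}
rng = np.random.default_rng(1)
frames = rng.normal(loc=0.0, scale=1.0, size=(50, 2))
print(identify(frames, models))  # → A
```

In a real system each speaker's weights, means, and variances would be trained with EM on that speaker's enrollment data, which is where the amount of training data matters.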

  • Dialogue Speech Recognition by Combining Hierarchical Topic Classification and Language Model Switching

    Ian R. LANE  Tatsuya KAWAHARA  Tomoko MATSUI  Satoshi NAKAMURA  

     
    PAPER-Spoken Language Systems
    Vol: E88-D No:3  Page(s): 446-454

    An efficient, scalable speech recognition architecture combining topic detection and topic-dependent language modeling is proposed for multi-domain spoken language systems. In the proposed approach, the inferred topic is automatically detected from the user's utterance, and speech recognition is then performed by applying an appropriate topic-dependent language model. This approach enables users to freely switch between domains while maintaining high recognition accuracy. As topic detection is performed on a single utterance, detection errors may occur and propagate through the system. To improve robustness, a hierarchical back-off mechanism is introduced where detailed topic models are applied when topic detection is confident and wider models that cover multiple topics are applied in cases of uncertainty. The performance of the proposed architecture is evaluated when combined with two topic detection methods: unigram likelihood and SVMs (Support Vector Machines). On the ATR Basic Travel Expression Corpus, both methods provide a significant reduction in WER (9.7% and 10.3%, respectively) compared to a single language model system. Furthermore, recognition accuracy is comparable to performing decoding with all topic-dependent models in parallel, while the required computational cost is much reduced.
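The hierarchical back-off idea can be sketched as confidence-thresholded model selection; the model names, topic grouping, and threshold below are hypothetical, not the paper's configuration:

```python
def select_language_model(topic_scores, topic_to_group, confidence_threshold=0.7):
    """Pick a detailed topic-dependent LM when topic detection is confident;
    otherwise back off to the wider model covering a group of related topics.

    topic_scores:   dict mapping topic name -> detection score in [0, 1]
    topic_to_group: dict mapping topic name -> wider (grouped) model name
    """
    best_topic = max(topic_scores, key=topic_scores.get)
    if topic_scores[best_topic] >= confidence_threshold:
        return f"lm_{best_topic}"              # detailed topic model
    return f"lm_{topic_to_group[best_topic]}"  # back off to wider model

groups = {"hotel": "accommodation", "restaurant": "accommodation",
          "flight": "transport", "train": "transport"}

print(select_language_model({"hotel": 0.92, "flight": 0.05}, groups))  # lm_hotel
print(select_language_model({"hotel": 0.45, "flight": 0.40}, groups))  # lm_accommodation
```

The back-off keeps a single decoding pass per utterance, which is why the computational cost stays far below running all topic-dependent models in parallel.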

  • Verification of Multi-Class Recognition Decision: A Classification Approach

    Tomoko MATSUI  Frank K. SOONG  Biing-Hwang JUANG  

     
    PAPER-Spoken Language Systems
    Vol: E88-D No:3  Page(s): 455-462

    We investigate strategies for improving utterance verification performance using a two-class pattern classification approach, including utilizing N-best candidate scores, modifying segmentation boundaries, applying background and out-of-vocabulary filler models, incorporating contexts, and minimizing verification errors via discriminative training. A connected-digit database recorded in a noisy, moving car with a hands-free microphone mounted on the sun visor is used to evaluate verification performance. The equal error rate (EER) of word verification is employed as the sole performance measure. All factors and their effects on verification performance are presented in detail. The EER is reduced from 29%, using the standard likelihood ratio test, to 21.4% when all features are properly integrated.
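The baseline likelihood ratio test and the EER measure can be sketched as follows; the score lists are toy values, not the paper's data:

```python
def likelihood_ratio_accept(log_l_target, log_l_filler, threshold):
    """Accept the hypothesized word if the log-likelihood ratio against
    the filler (alternative) model exceeds the decision threshold."""
    return (log_l_target - log_l_filler) > threshold

def equal_error_rate(true_scores, impostor_scores):
    """Sweep the decision threshold over all observed scores and return
    the point where false rejection and false acceptance are closest."""
    best_gap, eer = 1.0, None
    for t in sorted(true_scores + impostor_scores):
        frr = sum(s <= t for s in true_scores) / len(true_scores)
        far = sum(s > t for s in impostor_scores) / len(impostor_scores)
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

# Toy verification scores: correctly recognized words score higher on average.
true_scores = [2.1, 1.8, 0.9, 1.5, 0.2, 1.1]
impostor_scores = [-1.0, 0.1, -0.5, 1.0, -0.2, 0.4]
print(round(equal_error_rate(true_scores, impostor_scores), 3))  # 0.167
```

The classification approach in the paper replaces the single fixed-threshold ratio test with a trained two-class decision over richer features, which is what drives the EER down.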