
Keyword Search Result

[Keyword] pronunciation (11 hits)

1-11 of 11 hits
  • Articulatory Modeling for Pronunciation Error Detection without Non-Native Training Data Based on DNN Transfer Learning

    Richeng DUAN  Tatsuya KAWAHARA  Masatake DANTSUJI  Jinsong ZHANG  

     
    PAPER-Speech and Hearing

    Publicized: 2017/05/26
    Vol: E100-D No:9
    Page(s): 2174-2182

    Aiming to detect pronunciation errors produced by second-language learners and to provide corrective feedback related to articulation, we develop effective articulatory models based on deep neural networks (DNNs). Articulatory attributes are defined for manner and place of articulation. Because non-native speech is difficult to collect on a large scale, several transfer-learning-based modeling methods are explored to train these models efficiently without such data. We first investigate three closely related secondary tasks aimed at effective learning of DNN articulatory models. We also propose exploiting large speech corpora of the native and target languages to model inter-language phenomena; this kind of transfer learning provides a better feature representation of non-native speech. Related-task transfer and language transfer learning are further combined at the network level. All proposed methods outperformed the conventional DNN baseline. In the native attribute recognition task, the network-level combination reduced the recognition error rate by more than 10% relative for all articulatory attributes. The method was also applied to pronunciation error detection for Japanese native speakers learning Mandarin Chinese, achieving relative improvements of up to 17.0% in detection accuracy and up to 19.9% in F-score, which also surpasses the lattice-based combination.
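
    As an illustrative aside (not from the paper), a minimal transfer-learning sketch in Python/NumPy: a hidden layer standing in for representations learned on a data-rich source task is frozen, and only a new output layer is trained on scarce target-task labels. All data, sizes, and the training loop are invented placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        def softmax(z):
            e = np.exp(z - z.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)

        def train_output_layer(H, y, n_cls, lr=0.5, epochs=200):
            """Train a softmax layer on fixed (frozen) hidden activations H."""
            W = np.zeros((H.shape[1], n_cls))
            Y = np.eye(n_cls)[y]
            for _ in range(epochs):
                P = softmax(H @ W)
                W -= lr * H.T @ (P - Y) / len(y)  # cross-entropy gradient step
            return W

        # Stand-in for hidden layers trained on a data-rich source task
        # (e.g., native speech); here just a fixed random projection.
        W_hid = rng.normal(scale=0.3, size=(20, 32))

        def hidden(X):
            return np.tanh(X @ W_hid)

        # Scarce target task: 4 articulatory-attribute classes, 50 samples.
        X_tgt = rng.normal(size=(50, 20))
        y_tgt = rng.integers(0, 4, size=50)
        W_out = train_output_layer(hidden(X_tgt), y_tgt, n_cls=4)
        pred = softmax(hidden(X_tgt) @ W_out).argmax(axis=1)
        print("train accuracy:", (pred == y_tgt).mean())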

  • Discriminative Pronunciation Modeling Using the MPE Criterion

    Meixu SONG  Jielin PAN  Qingwei ZHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing

    Publicized: 2014/12/02
    Vol: E98-D No:3
    Page(s): 717-720

    Introducing pronunciation models into decoding has been proven beneficial to LVCSR. In this paper, a discriminative pronunciation modeling method is presented within the framework of Minimum Phone Error (MPE) training for HMM/GMM. To bring the pronunciation models into MPE training, the auxiliary function is rewritten at the word level and decomposed into two parts: one for co-training the acoustic models, and the other for discriminatively training the pronunciation models. On a Mandarin conversational telephone speech recognition task, compared to a baseline using a canonical lexicon, the discriminative pronunciation models reduced the absolute Character Error Rate (CER) by 0.7% on the LDC test set, and acoustic model co-training achieved an additional 0.8% CER reduction.
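
    As a hedged illustration (not the paper's exact update rule), the word-level decomposition can be pictured as re-estimating per-word pronunciation variant probabilities from MPE-style numerator and denominator counts; the EBW-like smoothing constant D and the toy counts below are assumptions.

        def mpe_variant_update(prior, num, den, D=2.0):
            """One EBW-like step: boost variants whose accuracy-weighted
            numerator count exceeds the competitor (denominator) count."""
            raw = {v: num[v] - den[v] + D * prior[v] for v in prior}
            shift = max(0.0, -min(raw.values())) + 1e-6  # keep values positive
            pos = {v: r + shift for v, r in raw.items()}
            total = sum(pos.values())
            return {v: p / total for v, p in pos.items()}

        prior = {"d ey t ah": 0.5, "d ae t ah": 0.5}  # two variants of "data"
        num = {"d ey t ah": 3.2, "d ae t ah": 0.8}    # accuracy-weighted counts
        den = {"d ey t ah": 1.1, "d ae t ah": 2.0}    # competitor occupancies
        print(mpe_variant_update(prior, num, den))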

  • A Novel Discriminative Method for Pronunciation Quality Assessment

    Junbo ZHANG  Fuping PAN  Bin DONG  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-Speech and Hearing

    Vol: E96-D No:5
    Page(s): 1145-1151

    In this paper, we present a novel method for automatic pronunciation quality assessment. Unlike the popular "Goodness of Pronunciation" (GOP) method, this method does not map decoding confidence to a pronunciation quality score but differentiates utterances of different pronunciation quality directly. The student's utterance is decoded twice. The first decoding pass obtains the time boundaries of each phone in the utterance by forced alignment with a conventionally trained acoustic model (AM). The second pass differentiates the pronunciation quality of each triphone using a specially trained AM, in which triphones of different pronunciation quality are trained as distinct units; the model is trained discriminatively so that it best discriminates among triphones that share the same name but differ in pronunciation quality score. Because the decoding network in the second pass includes these quality-tagged triphones, phone-level scores can be read directly from the decoding result. The phone-level scores are then combined into sentence-level scores using the maximum entropy criterion. The experimental results show that scoring performance improved significantly compared to the GOP method, especially at the sentence level.
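
    A minimal sketch of the second-pass idea, assuming quality-tagged triphone models and made-up log-likelihoods and fusion weights: each aligned segment is labeled by its best-scoring tagged model, and the phone-level scores are fused into a sentence score with a log-linear (maximum-entropy-style) combination.

        import math

        def best_quality(seg_loglik):
            """Quality tag of the best-scoring tagged triphone model."""
            return max(seg_loglik, key=seg_loglik.get)

        # Log-likelihoods of one utterance's aligned segments under
        # 'good'/'poor' tagged models (placeholder numbers).
        segments = [
            {"good": -41.2, "poor": -44.8},
            {"good": -57.0, "poor": -52.3},
            {"good": -38.9, "poor": -39.5},
        ]
        phone_scores = [1.0 if best_quality(s) == "good" else 0.0
                        for s in segments]

        weights = [0.4, 0.3, 0.3]  # maxent-style weights (assumed values)
        z = sum(w * x for w, x in zip(weights, phone_scores))
        sentence_score = 1.0 / (1.0 + math.exp(-(z - 0.5)))  # squash to (0,1)
        print(phone_scores, round(sentence_score, 3))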

  • Regularized Maximum Likelihood Linear Regression Adaptation for Computer-Assisted Language Learning Systems

    Dean LUO  Yu QIAO  Nobuaki MINEMATSU  Keikichi HIROSE  

     
    PAPER-Educational Technology

    Vol: E94-D No:2
    Page(s): 308-316

    This study focuses on speaker adaptation techniques for Computer-Assisted Language Learning (CALL). We first investigate the effects and problems of Maximum Likelihood Linear Regression (MLLR) speaker adaptation when used in pronunciation evaluation. Automatic scoring and error detection experiments are conducted on two publicly available databases of Japanese learners' English pronunciation. As expected, over-adaptation causes misjudgment of pronunciation accuracy. Following this analysis, we propose a novel method, Regularized Maximum Likelihood Linear Regression (Regularized-MLLR) adaptation, to counter the adverse effects of MLLR adaptation. The method uses a group of teachers' data to regularize learners' transformation matrices so that erroneous pronunciations are not mistakenly transformed into correct ones. We implement this idea in two ways: one uses the average of the teachers' transformation matrices as a constraint on MLLR, and the other represents learners' transformations as linear combinations of the teachers' matrices. Experimental results show that the proposed methods make better use of MLLR adaptation and avoid over-adaptation.
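
    Both regularization ideas reduce to simple matrix arithmetic; here is a toy NumPy sketch, with synthetic teacher transforms, an assumed regularization weight, and Dirichlet combination weights standing in for the estimated ones.

        import numpy as np

        rng = np.random.default_rng(1)
        teachers = [np.eye(3) + 0.05 * rng.normal(size=(3, 3))
                    for _ in range(5)]                  # teachers' MLLR transforms
        W_learner = np.eye(3) + 0.4 * rng.normal(size=(3, 3))  # over-adapted

        # (1) Average constraint: shrink toward the teachers' mean transform.
        W_mean = np.mean(teachers, axis=0)
        lam = 0.7                                       # strength (assumed)
        W_reg = lam * W_mean + (1 - lam) * W_learner

        # (2) Basis form: learner transform as a weighted sum of teacher
        # transforms, so it stays inside the teachers' subspace.
        alphas = rng.dirichlet(np.ones(len(teachers)))
        W_basis = sum(a * T for a, T in zip(alphas, teachers))

        print(np.linalg.norm(W_learner - W_mean),  # far from the teachers
              np.linalg.norm(W_reg - W_mean),      # pulled back toward them
              np.linalg.norm(W_basis - W_mean))    # inside the teacher subspace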

  • A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    Yoo Rhee OH  Hong Kook KIM  

     
    PAPER-Adaptation

    Vol: E93-D No:9
    Page(s): 2379-2387

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted to the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). The proposed hybrid adaptation can be performed at either the state-tying or the triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level method then adapts the pronunciation models by adding the pronunciation variants to the pronunciation dictionary, and adapts the acoustic models by clustering the states of the triphone acoustic models using the acoustic variants. The triphone-modeling level method adapts the pronunciation models in the same way; for acoustic model adaptation, however, the triphone acoustic models are re-estimated based on the adapted pronunciation models, and the states of the re-estimated models are clustered using the acoustic variants. Korean-spoken English recognition experiments show that ASR systems employing the state-tying and triphone-modeling level methods reduce the average word error rate (WER) on non-native speech by a relative 17.1% and 22.1%, respectively, compared to a baseline ASR system.
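
    The rule-classification step might look like the following toy sketch, where a frequency threshold (an assumption, not the paper's criterion) decides whether a variant rule updates the pronunciation dictionary or is left to acoustic-model state clustering.

        # Invented rules and lexicon, for illustration only.
        rules = [
            {"src": "r", "dst": "l", "freq": 0.62},   # frequent substitution
            {"src": "th", "dst": "s", "freq": 0.15},  # rare: acoustic variant
        ]
        PRON_THRESHOLD = 0.3  # assumed cutoff

        lexicon = {"rice": ["r", "ay", "s"]}
        for rule in rules:
            if rule["freq"] >= PRON_THRESHOLD:
                # Pronunciation variant: add an alternative dictionary entry.
                for word, phones in list(lexicon.items()):
                    if rule["src"] in phones:
                        alt = [rule["dst"] if p == rule["src"] else p
                               for p in phones]
                        lexicon[word + "(2)"] = alt
            else:
                # Acoustic variant: would instead drive triphone state
                # clustering rather than a dictionary change.
                pass
        print(lexicon)  # {'rice': [...], 'rice(2)': ['l', 'ay', 's']}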

  • An LVCSR Based Reading Miscue Detection System Using Knowledge of Reference and Error Patterns

    Changliang LIU  Fuping PAN  Fengpei GE  Bin DONG  Hongbin SUO  Yonghong YAN  

     
    PAPER-Speech and Hearing

    Vol: E92-D No:9
    Page(s): 1716-1724

    This paper describes a reading miscue detection system based on the conventional Large Vocabulary Continuous Speech Recognition (LVCSR) framework [1]. To incorporate knowledge of the reference (what the reader ought to read) and of common error patterns into the decoding process, two methods are proposed: Dynamic Multiple Pronunciation Incorporation (DMPI) and Dynamic Interpolation of the Language Model (DILM). DMPI dynamically adds pronunciation variations to the search space to predict reading substitutions and insertions; to balance the coverage of error predictions against the perplexity of the search space, only the pronunciation variants related to the reference are added. DILM dynamically interpolates the general language model based on an analysis of the reference, keeping the active decoding paths relatively close to the reference. This makes recognition more accurate, which further improves detection performance. At the final detection stage, an improved dynamic programming (DP) alignment between the confusion network (CN) from speech recognition and the reference generates the detection result. Experimental results show that the two proposed methods decrease the Equal Error Rate (EER) by 14% relative, from 46.4% to 39.8%.
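
    DILM's core operation is a per-word interpolation of two language models; a minimal sketch with toy unigram probabilities and an assumed interpolation weight:

        def interpolate_lm(p_general, p_reference, lam=0.6):
            """P(w) = lam * P_ref(w) + (1 - lam) * P_gen(w), per word."""
            words = set(p_general) | set(p_reference)
            return {w: lam * p_reference.get(w, 0.0)
                       + (1 - lam) * p_general.get(w, 0.0)
                    for w in words}

        p_gen = {"the": 0.05, "cat": 0.001, "sat": 0.001, "dog": 0.002}
        p_ref = {"the": 0.25, "cat": 0.25, "sat": 0.25}  # from the reference text
        # Words in the reference are boosted; off-reference words shrink.
        print(interpolate_lm(p_gen, p_ref))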

  • Morpheme-Based Modeling of Pronunciation Variation for Large Vocabulary Continuous Speech Recognition in Korean

    Kyong-Nim LEE  Minhwa CHUNG  

     
    PAPER-Speech and Hearing

    Vol: E90-D No:7
    Page(s): 1063-1072

    This paper describes a morpheme-based pronunciation model that is especially useful for developing the pronunciation lexicon of a Large Vocabulary Continuous Speech Recognition (LVCSR) system for Korean. To address pronunciation variation in Korean, we analyze phonological rules based on phonemic contexts together with morphological category and morpheme boundary information. Since the same phoneme sequence can be pronounced in different ways across a morpheme boundary, the morphological environment must be incorporated into pronunciation variation modeling. We implement a rule-based pronunciation variant generator that produces a pronunciation lexicon with context-dependent multiple variants. At the lexical level, we explicitly model pronunciation variation by adding variants that occur across morpheme boundaries, as well as within morphemes, to the pronunciation lexicon. At the acoustic level, we train the phone models on transcriptions re-labeled by forced alignment with the context-dependent pronunciation lexicon. The proposed lexicon offers potential benefits for both training and decoding in an LVCSR system. We then perform speech recognition experiments on a read-speech task with a 34K-morpheme vocabulary, which confirm that pronunciation variation modeling based on morpho-phonological analysis improves performance.
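
    A toy sketch of such a generator: one boundary-sensitive rule (simplified placeholder phonology, not an actual Korean rule set) fires only when its context spans a morpheme boundary, marked '+'.

        def apply_rules(phones):
            """Nasalize 'k' before 'n' across a morpheme boundary ('+')."""
            variants = []
            for i in range(len(phones) - 2):
                if (phones[i], phones[i + 1], phones[i + 2]) == ("k", "+", "n"):
                    variants.append(phones[:i] + ["ng", "+"] + phones[i + 2:])
            return [phones] + variants  # canonical form plus variants

        # The boundary-adjacent 'k' surfaces as 'ng' in the generated variant.
        print(apply_rules(["h", "a", "k", "+", "n", "yeon"]))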

  • Gemination of Consonant in Spontaneous Speech: An Analysis of the "Corpus of Spontaneous Japanese"

    Masako FUJIMOTO  Takayuki KAGOMIYA  

     
    PAPER-Speech Corpora and Related Topics

    Vol: E88-D No:3
    Page(s): 562-568

    In Japanese, there is frequent alternation between CV morae and moraic geminate consonants. In this study, we analyzed the phonemic environments of consonant gemination (CG) using the "Corpus of Spontaneous Japanese" (CSJ). The results revealed that the environments in which gemination occurs parallel, to some extent, those of vowel devoicing. There are, however, two crucial differences. First, CG tends to occur in a /kVk/ environment, whereas vowel devoicing does not. Second, gemination occurs when the preceding consonant is /r/, whereas vowel devoicing does not. These observations suggest that the mechanism leading to CG differs from the one leading to vowel devoicing.

  • Multi-Modal Neural Networks for Symbolic Sequence Pattern Classification

    Hanxi ZHU  Ikuo YOSHIHARA  Kunihito YAMAMORI  Moritoshi YASUNAGA  

     
    PAPER-Biocybernetics, Neurocomputing

    Vol: E87-D No:7
    Page(s): 1943-1952

    We have developed Multi-modal Neural Networks (MNN) to improve the accuracy of symbolic sequence pattern classification. The basic structure of the MNN comprises several neural-network sub-classifiers and a decision unit. Two types of MNN are proposed: a primary MNN and a twofold MNN. In the primary MNN, each sub-classifier is a conventional three-layer neural network, and the decision unit produces the final decision from the sub-classifier outputs by majority vote. In the twofold MNN, each sub-classifier is itself a primary MNN performing partial classification, and the decision unit is a three-layer neural network that produces the final decision. Since the primary MNN structure is folded into the sub-classifiers, the basic MNN structure is used twice, hence the name twofold MNN. The MNN is validated on two benchmarks: EPR (English Pronunciation Reasoning) and protein secondary structure prediction. The reasoning accuracy on EPR improves from 85.4% with a three-layer neural network to 87.7% with the primary MNN. In protein secondary structure prediction, based on a database of 126 non-homologous protein sequences, the average accuracy improves from 69.1% with a three-layer neural network to 74.6% with the primary MNN and 75.6% with the twofold MNN.
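
    The primary MNN's decision unit is a plain majority vote over sub-classifier outputs; a minimal sketch with stubbed sub-classifier predictions:

        from collections import Counter

        def majority_decision(predictions):
            """Class predicted by the most sub-classifiers."""
            return Counter(predictions).most_common(1)[0][0]

        # Three sub-classifiers vote on one symbolic-sequence pattern.
        print(majority_decision(["helix", "sheet", "helix"]))  # -> helix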

  • A Statistical Method of Evaluating Pronunciation Proficiency for English Words Spoken by Japanese

    Seiichi NAKAGAWA  Naoki NAKAMURA  Kazumasa MORI  

     
    PAPER-Speech and Hearing

    Vol: E87-D No:7
    Page(s): 1917-1922

    In this paper, we propose a statistical method of evaluating the pronunciation proficiency of English words spoken by Japanese. We statistically analyzed the utterances to find a combination of acoustic features that correlates highly with an English teacher's scores. We observed that the phoneme recognition rates (correct rate and accuracy) were the best measure of pronunciation proficiency, and that the likelihood ratio of English phoneme acoustic models to phoneme acoustic models adapted to Japanese speakers was the second best. The measure most highly correlated with the teacher's scores combined the likelihood under American native models, the likelihood under English models adapted to Japanese speakers, the best likelihood over arbitrary sequences of acoustic models, the phoneme recognition rate, and the rate of speech. At the five-word set level, we obtained correlation coefficients of 0.81 on data open with respect to vocabulary and 0.69 on data open with respect to speaker, both exceeding the 0.65 correlation between human raters' scores. At the 15-word set level, which corresponds to one or two sentences, we obtained a correlation coefficient of 0.86 on speaker-open data.
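
    The feature combination amounts to fitting weights against teachers' scores and checking the correlation of the fitted score; a sketch on synthetic data (the three features and all numbers are invented):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 40
        feats = np.column_stack([
            rng.normal(0.7, 0.1, n),  # phoneme recognition accuracy
            rng.normal(0.0, 1.0, n),  # native / adapted-model likelihood ratio
            rng.normal(4.0, 0.5, n),  # rate of speech (phones/sec)
        ])
        teacher = feats @ np.array([3.0, 0.8, 0.3]) + rng.normal(0, 0.3, n)

        X = np.column_stack([feats, np.ones(n)])         # add bias term
        w, *_ = np.linalg.lstsq(X, teacher, rcond=None)  # combination weights
        pred = X @ w
        print("correlation:", round(float(np.corrcoef(pred, teacher)[0, 1]), 3))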

  • Automatic Evaluation of English Pronunciation Based on Speech Recognition Techniques

    Hiroshi HAMADA  Satoshi MIKI  Ryohei NAKATSU  

     
    PAPER-Speech Processing

    Vol: E76-D No:3
    Page(s): 352-359

    A new method is proposed for automatically evaluating the English pronunciation quality of non-native speakers. It is assumed that pronunciation can be rated by three criteria: the static characteristics of phonetic spectra, the dynamic structure of spectrum sequences, and the prosodic characteristics of utterances. The evaluation uses speech recognition techniques to compare English words pronounced by a non-native speaker with those pronounced by a native speaker. Three evaluation measures are proposed: (1) the standard deviation of the mapping vectors, which map the codebook vectors of the non-native speaker onto the vector space of the native speaker, evaluates the static characteristics of the phonetic spectra; (2) the spectral distance between words pronounced by the non-native and native speakers, obtained by DTW, evaluates the dynamic characteristics of the spectral sequences; and (3) the differences in fundamental frequency and speech power between the native and non-native pronunciations evaluate the prosodic characteristics. Evaluation experiments were carried out on 441 words spoken by 10 Japanese speakers and 10 native speakers. Half of the 441 words were used to evaluate the static spectral characteristics, and the other half the dynamic spectral characteristics as well as the prosodic characteristics. Based on the experimental results, the correlation between the evaluation scores and scores determined by human judgement is found to be 0.90.
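
    Measure (2) rests on dynamic time warping; a self-contained sketch of the classic DTW distance between two toy "spectral" frame sequences (the frames are invented 2-D vectors):

        import numpy as np

        def dtw_distance(a, b):
            """O(len(a)*len(b)) DTW with Euclidean local cost."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1],
                                         D[i - 1, j - 1])
            return D[n, m] / (n + m)  # length-normalized distance

        native = np.array([[0.0, 1.0], [0.2, 0.9], [1.0, 0.1]])
        learner = np.array([[0.1, 1.1], [0.1, 1.0], [0.3, 0.8], [1.1, 0.2]])
        print(round(dtw_distance(native, learner), 3))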