Author Search Result

[Author] Yonghong YAN (34 hits)

Showing 1-20 of 34 hits

  • Effects of the Temporal Fine Structure in Different Frequency Bands on Mandarin Tone Perception

    Lin YANG  Jianping ZHANG  Jian SHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing
    Vol: E91-D No:2  Page(s): 371-374

    This letter evaluates the relative contributions of temporal fine structure cues in different frequency bands to Mandarin tone perception using novel "auditory chimaeras". Our results confirm the importance of temporal fine structure cues to lexical tone perception and identify its dominant region: the second to fifth harmonics contribute no less than the fundamental frequency itself.
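
    To make the stimulus construction concrete: an auditory chimaera pairs the temporal envelope of one signal with the temporal fine structure of another, band by band. The sketch below illustrates that general recipe under our own assumptions (Hilbert-based decomposition, illustrative band edges and filter order, a sampling rate well above the top band edge); it is not the letter's exact configuration.

    ```python
    import numpy as np
    from scipy.signal import butter, hilbert, sosfiltfilt

    def auditory_chimaera(env_src, tfs_src, fs, edges=(80, 500, 1000, 2000, 4000)):
        """Combine the envelope of env_src with the fine structure of tfs_src."""
        y = np.zeros(len(env_src))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            a_env = hilbert(sosfiltfilt(sos, env_src))    # analytic signal per band
            a_tfs = hilbert(sosfiltfilt(sos, tfs_src))
            y += np.abs(a_env) * np.cos(np.angle(a_tfs))  # envelope x fine structure
        return y
    ```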

  • Approximate Decision Function and Optimization for GMM-UBM Based Speaker Verification

    Xiang XIAO  Xiang ZHANG  Haipeng WANG  Hongbin SUO  Qingwei ZHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing
    Vol: E92-D No:9  Page(s): 1798-1802

    The GMM-UBM framework has proved to be one of the most effective approaches to the automatic speaker verification (ASV) task in recent years. In this letter, we first propose an approximate decision function for the traditional GMM-UBM, which shows that every Gaussian component contributes equally to classification. However, research in speaker perception shows that the speech sound units modeled by different Gaussian components contribute unequally to speaker verification. This motivates us to emphasize sound units that discriminate between speakers while de-emphasizing those that carry little speaker information. Experiments on the 2006 NIST SRE core task show that the proposed approach outperforms the traditional GMM-UBM approach in classification accuracy.
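
    For orientation, the GMM-UBM decision function is the average-frame log-likelihood ratio between the speaker model and the UBM. The numpy sketch below shows that baseline score plus an optional per-component emphasis vector standing in for the letter's idea of weighting sound units; the emphasis mechanism and all names here are our assumptions, not the published formulation.

    ```python
    import numpy as np

    def gmm_frame_loglik(X, weights, means, variances, emphasis=None):
        """Per-frame log-likelihood of a diagonal-covariance GMM.
        emphasis (K,) optionally re-weights each Gaussian's contribution;
        uniform emphasis recovers the standard decision function."""
        diff = X[:, None, :] - means[None, :, :]               # (T, K, D)
        log_comp = (-0.5 * (diff ** 2 / variances).sum(-1)
                    - 0.5 * np.log(2 * np.pi * variances).sum(-1)
                    + np.log(weights))                         # (T, K)
        if emphasis is not None:
            log_comp += np.log(emphasis)
        m = log_comp.max(axis=1, keepdims=True)                # log-sum-exp
        return (m + np.log(np.exp(log_comp - m).sum(axis=1, keepdims=True))).ravel()

    def llr_score(X, spk, ubm):
        """spk, ubm: (weights, means, variances) tuples; X: (T, D) features."""
        return (gmm_frame_loglik(X, *spk) - gmm_frame_loglik(X, *ubm)).mean()
    ```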

  • A Two-Stage Phase-Aware Approach for Monaural Multi-Talker Speech Separation

    Lu YIN  Junfeng LI  Yonghong YAN  Masato AKAGI  

     
    PAPER-Speech and Hearing
    Publicized: 2020/04/20  Vol: E103-D No:7  Page(s): 1732-1743

    Simultaneous utterances impair intelligibility for hearing-impaired listeners and degrade the performance of automatic speech recognition systems. Recently, deep neural networks have dramatically improved speech separation performance. However, most previous works estimate only the speech magnitude and reuse the mixture phase for speech reconstruction; the mixture phase has become a critical limitation on separation performance. This study proposes a two-stage phase-aware approach for multi-talker speech separation that recovers both the magnitude and the phase. For phase recovery, the Multiple Input Spectrogram Inversion (MISI) algorithm is adopted for its effectiveness and simplicity. The study implements MISI in a mask-based form and shows that the ideal amplitude mask (IAM) is the optimal mask for mask-based MISI phase recovery, introducing the least phase distortion. To compensate for the residual phase-recovery error and minimize signal distortion, an advanced mask is proposed for magnitude estimation. The IAM and the proposed mask are estimated at separate stages to recover the phase and the magnitude, respectively. Two neural network frameworks are evaluated for magnitude estimation in the second stage, demonstrating the effectiveness and flexibility of the proposed approach. Experimental results show that the proposed approach significantly reduces the distortion of the separated speech.
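
    As a rough sketch of the phase-recovery step: mask-based MISI starts each source from the mixture phase and iteratively redistributes the mixture-consistency error among the sources. The minimal version below assumes scipy STFT defaults and omits the paper's IAM-specific variant and two-stage networks.

    ```python
    import numpy as np
    from scipy.signal import stft, istft

    def misi(mix, est_mags, n_iter=10, nperseg=512):
        """mix: (T,) mixture signal; est_mags: per-source magnitude
        estimates, each shaped like the STFT of the mixture."""
        _, _, X = stft(mix, nperseg=nperseg)
        phases = [np.angle(X)] * len(est_mags)     # initialize with mixture phase
        sigs = []
        for _ in range(n_iter):
            sigs = []
            for mag, ph in zip(est_mags, phases):
                _, s = istft(mag * np.exp(1j * ph), nperseg=nperseg)
                s = np.pad(s[:len(mix)], (0, max(0, len(mix) - len(s))))
                sigs.append(s)
            err = (mix - np.sum(sigs, axis=0)) / len(sigs)  # mixture-consistency error
            phases = [np.angle(stft(s + err, nperseg=nperseg)[2]) for s in sigs]
        return sigs
    ```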

  • A Hybrid Speech Emotion Recognition System Based on Spectral and Prosodic Features

    Yu ZHOU  Junfeng LI  Yanqing SUN  Jianping ZHANG  Yonghong YAN  Masato AKAGI  

     
    PAPER-Human-computer Interaction
    Vol: E93-D No:10  Page(s): 2813-2821

    In this paper, we present a hybrid speech emotion recognition system exploiting both spectral and prosodic features in speech. To capture the emotional information in the spectral domain, we propose a new spectral feature extraction method that applies a novel non-uniform subband processing instead of the mel-frequency subbands used in Mel-Frequency Cepstral Coefficients (MFCC). For prosodic features, a set of features closely correlated with speech emotional states is selected. In the proposed hybrid system, owing to the inherently different characteristics of these two kinds of features (e.g., data size), the newly extracted spectral features are modeled by a Gaussian Mixture Model (GMM) and the selected prosodic features by a Support Vector Machine (SVM). The final result is obtained by combining the outputs of these two subsystems. Experimental results show that (1) the proposed non-uniform spectral features are more effective than traditional MFCC features for emotion recognition; and (2) the proposed hybrid system using both spectral and prosodic features yields a relative recognition error reduction of 17.0% over systems using only the spectral features, and of 62.3% over those using only the prosodic features.
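
    A minimal sketch of the two-classifier architecture described here, assuming frame-level spectral features modeled per emotion class by a GMM and utterance-level prosodic vectors classified by an SVM, with a simple weighted-sum late fusion; the fusion rule, hyperparameters, and label convention (classes numbered 0..n-1) are our assumptions.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def train_hybrid(spec_utts, pros_vecs, labels, n_classes):
        # One GMM per emotion class over pooled frame-level spectral features
        gmms = [GaussianMixture(n_components=8).fit(
                    np.vstack([u for u, y in zip(spec_utts, labels) if y == c]))
                for c in range(n_classes)]
        svm = SVC(probability=True).fit(np.vstack(pros_vecs), labels)
        return gmms, svm

    def classify(gmms, svm, spec_utt, pros_vec, alpha=0.5):
        ll = np.array([g.score(spec_utt) for g in gmms])  # mean log-likelihoods
        gmm_post = np.exp(ll - ll.max())
        gmm_post /= gmm_post.sum()
        svm_post = svm.predict_proba(pros_vec.reshape(1, -1))[0]
        return int(np.argmax(alpha * gmm_post + (1 - alpha) * svm_post))
    ```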

  • Discriminative Approach to Build Hybrid Vocabulary for Conversational Telephone Speech Recognition of Agglutinative Languages

    Xin LI  Jielin PAN  Qingwei ZHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing
    Vol: E96-D No:11  Page(s): 2478-2482

    Morphemes, which are obtained from morphological parsing, and statistical sub-words, which are derived from data-driven splitting, are commonly used as recognition units for speech recognition of agglutinative languages. In this letter, we propose a discriminative approach that selects, for each distinct word type, the splitting result more likely to improve the recognizer's performance. An objective function involving the unigram language model (LM) probability and the count of misrecognized phones on the acoustic training data is defined and minimized. After determining the splitting result for each word in the text corpus, we select the frequent units to build a hybrid vocabulary of morphemes and statistical sub-words. Compared to a statistical sub-word based system, the hybrid system achieves a 0.8% reduction in letter error rate (LER) on the test set.
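
    To illustrate the per-word selection this letter describes: each candidate splitting is scored by an objective that trades off unigram LM probability against misrecognized phones on the training data, and the cheaper candidate wins. The exact objective, the trade-off weight alpha, and the unseen-unit floor below are our assumptions.

    ```python
    def choose_splitting(candidates, unigram_logprob, misrec_counts, alpha=1.0):
        """candidates: alternative unit sequences (morphemes vs. statistical
        sub-words) for one word type; misrec_counts[i]: misrecognized phones
        observed on the acoustic training data when candidates[i] is used."""
        def cost(i, cand):
            lm = -sum(unigram_logprob.get(u, -10.0) for u in cand)  # assumed floor
            return lm + alpha * misrec_counts[i]
        return min(enumerate(candidates), key=lambda ic: cost(*ic))[1]
    ```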

  • Development of a Mandarin-English Bilingual Speech Recognition System for Real World Music Retrieval

    Qingqing ZHANG  Jielin PAN  Yang LIN  Jian SHAO  Yonghong YAN  

     
    PAPER-Acoustic Modeling
    Vol: E91-D No:3  Page(s): 514-521

    In recent decades, there has been a great deal of research into the problem of bilingual speech recognition - developing a recognizer that can handle inter- and intra-sentential language switching between two languages. This paper presents our recent work on the development of a grammar-constrained, Mandarin-English bilingual Speech Recognition System (MESRS) for real world music retrieval. Two main difficulties in building bilingual speech recognition systems for real world applications are tackled: one is balancing the performance and complexity of the system; the other is effectively handling matrix-language accents in the embedded language. To process intra-sentential language switching and reduce the amount of data required to robustly estimate statistical models, a compact single set of bilingual acoustic models derived by phone set merging and clustering is developed instead of two separate monolingual models. In our study, a novel two-pass phone clustering method based on a confusion matrix (TCM) is presented and compared with the log-likelihood measure method; experiments show that TCM achieves better performance. Since potential users' native language is Mandarin, which serves as the matrix language in our application, their pronunciation of English, the embedded language, usually carries a Mandarin accent. To deal with these accents, different non-native adaptation approaches are investigated; experiments show that the model retraining method outperforms other common adaptation methods such as Maximum A Posteriori (MAP). With the effective incorporation of phone clustering and non-native adaptation, the Phrase Error Rate (PER) of MESRS for English utterances was reduced by 24.47% relative to the baseline monolingual English system, while the PER on Mandarin utterances was comparable to that of the baseline monolingual Mandarin system. For bilingual utterances, a 22.37% relative PER reduction was achieved.
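
    For intuition about confusion-matrix-based phone clustering: a cross-language confusion matrix can be row-normalized and symmetrized into a confusability score, and highly confusable Mandarin/English phone pairs merged into shared units. The single greedy pass below is only a sketch; the paper's two-pass TCM method is not reproduced, and the threshold is an assumption.

    ```python
    import numpy as np

    def merge_confusable(phones, C, threshold=0.3):
        """C[i, j]: count of phone i recognized as phone j across languages."""
        P = C / np.maximum(C.sum(axis=1, keepdims=True), 1)  # row-normalize
        S = 0.5 * (P + P.T)                                  # symmetric confusability
        np.fill_diagonal(S, 0.0)
        pairs = sorted(((i, j) for i in range(len(phones))
                        for j in range(i + 1, len(phones))),
                       key=lambda ij: -S[ij])
        merged, used = [], set()
        for i, j in pairs:
            if S[i, j] < threshold:
                break
            if i not in used and j not in used:              # merge each phone once
                merged.append((phones[i], phones[j]))
                used.update((i, j))
        return merged
    ```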

  • Using a Kind of Novel Phonotactic Information for SVM Based Speaker Recognition

    Xiang ZHANG  Hongbin SUO  Qingwei ZHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing
    Vol: E92-D No:4  Page(s): 746-749

    In this letter, we propose a new approach to SVM based speaker recognition that utilizes a novel kind of phonotactic information as the feature for SVM modeling. Gaussian mixture models (GMMs) have proven extremely successful for text-independent speaker recognition. The GMM universal background model (UBM) is a speaker-independent model, each component of which can be considered to model some underlying phonetic sound class. We assume that utterances from different speakers yield different average posterior probabilities on the same Gaussian component of the UBM, and that the supervector composed of the average posterior probabilities on all UBM components for each utterance is therefore discriminative. We use these supervectors as features for SVM based speaker recognition. Experimental results on a NIST SRE 2006 task show that the proposed approach achieves performance comparable to commonly used systems. Fusion results are also presented.
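
    The feature itself is easy to state: for each utterance, average the posterior probability of every UBM component over all frames and use the resulting vector as the SVM input. A minimal sketch with scikit-learn, where the component count and classifier choice are assumptions:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import LinearSVC

    def posterior_supervector(ubm, frames):
        """Average per-component posterior over one utterance's frames."""
        return ubm.predict_proba(frames).mean(axis=0)  # shape: (n_components,)

    # Assumed usage:
    # ubm = GaussianMixture(n_components=512, covariance_type="diag").fit(bg_frames)
    # X = np.vstack([posterior_supervector(ubm, u) for u in train_utts])
    # svm = LinearSVC().fit(X, train_labels)
    ```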

  • Automatic Singing Performance Evaluation for Untrained Singers

    Chuan CAO  Ming LI  Xiao WU  Hongbin SUO  Jian LIU  Yonghong YAN  

     
    LETTER-Music Information Processing
    Vol: E92-D No:8  Page(s): 1596-1600

    In this letter, we present an automatic approach to objective singing performance evaluation for untrained singers by relating acoustic measurements to perceptual ratings of singing voice quality. Several acoustic parameters and their combinations are investigated to find objective correspondences to the perceptual evaluation criteria. Experimental results show a relatively strong correlation between perceptual ratings and the combined features, and the reliability of the proposed evaluation system is comparable to that of human judges.

  • A Novel Discriminative Method for Pronunciation Quality Assessment

    Junbo ZHANG  Fuping PAN  Bin DONG  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-Speech and Hearing
    Vol: E96-D No:5  Page(s): 1145-1151

    In this paper, we present a novel method for automatic pronunciation quality assessment. Unlike the popular "Goodness of Pronunciation" (GOP) method, it does not map decoding confidence into a pronunciation quality score but differentiates utterances of different pronunciation quality directly. The student's utterance is decoded twice. The first pass obtains the time boundaries of each phone through forced alignment with a conventionally trained acoustic model (AM). The second pass differentiates the pronunciation quality of each triphone using a specially trained AM, in which triphones of different pronunciation quality are trained as distinct units; this model is trained discriminatively so that it best separates triphones that share the same name but differ in pronunciation quality score. Since the decoding network of the second pass includes these quality-specific triphones, phone-level scores can be read directly from the decoding result. The phone-level scores are then combined into sentence-level scores using the maximum entropy criterion. Experimental results show that scoring performance improves significantly over the GOP method, especially at the sentence level.
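
    On the final combination step: maximum entropy classification over discrete score classes is equivalent to multinomial logistic regression, so the phone-to-sentence mapping can be sketched as below. The aggregation features are our assumptions; the abstract does not specify them.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def sentence_features(phone_scores):
        """Fixed-length summary of one utterance's phone-level scores."""
        s = np.asarray(phone_scores, dtype=float)
        return np.array([s.mean(), s.min(), s.max(), s.std()])

    # Assumed usage, with discrete sentence-level ratings as targets:
    # X = np.vstack([sentence_features(p) for p in train_phone_scores])
    # maxent = LogisticRegression(max_iter=1000).fit(X, train_sentence_ratings)
    # pred = maxent.predict(sentence_features(test_phone_scores).reshape(1, -1))
    ```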

  • Factor Analysis of Neighborhood-Preserving Embedding for Speaker Verification

    Chunyan LIANG  Lin YANG  Qingwei ZHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing
    Vol: E95-D No:10  Page(s): 2572-2576

    In this letter, we adopt a new factor analysis of neighborhood-preserving embedding (NPE) for speaker verification. NPE preserves the local neighborhood structure of the data and defines a low-dimensional speaker space called the neighborhood-preserving embedding space. We compare the proposed method with the state-of-the-art total variability approach on the telephone-telephone core condition of the NIST 2008 Speaker Recognition Evaluation (SRE) dataset. The experimental results indicate that the proposed NPE method outperforms the total variability approach, providing up to 24% relative improvement.
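
    For readers unfamiliar with NPE: it learns a linear projection that preserves each point's reconstruction from its nearest neighbors, via a generalized eigenproblem. A compact numpy sketch of standard NPE follows, under assumed hyperparameters; the letter's factor-analysis formulation on top of it is not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.neighbors import NearestNeighbors

    def npe_projection(X, n_neighbors=10, n_components=50, reg=1e-3):
        """X: (n_samples, dim) vectors; returns a (dim, n_components) projection."""
        n, d = X.shape
        idx = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X).kneighbors(X)[1]
        W = np.zeros((n, n))
        for i in range(n):
            nbrs = idx[i, 1:]                     # drop the point itself
            Z = X[nbrs] - X[i]                    # local geometry around x_i
            C = Z @ Z.T
            C += reg * np.trace(C) * np.eye(len(nbrs))
            w = np.linalg.solve(C, np.ones(len(nbrs)))
            W[i, nbrs] = w / w.sum()              # reconstruction weights
        I = np.eye(n)
        M = (I - W).T @ (I - W)
        # smallest generalized eigenvectors of (X^T M X, X^T X) span the subspace
        vals, vecs = eigh(X.T @ M @ X, X.T @ X + reg * np.eye(d))
        return vecs[:, :n_components]
    ```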

  • Speaker Recognition Using Sparse Probabilistic Linear Discriminant Analysis

    Hai YANG  Yunfei XU  Qinwei ZHAO  Ruohua ZHOU  Yonghong YAN  

     
    PAPER
    Vol: E96-A No:10  Page(s): 1938-1945

    Sparse representation has been studied within the field of signal processing as a means of providing a compact form of signal representation. This paper introduces a sparse representation based framework named Sparse Probabilistic Linear Discriminant Analysis for speaker recognition. In this latent variable model, probabilistic linear discriminant analysis is modified to learn overcomplete sparse representations by replacing the Gaussian prior on the factors with a Laplace prior that encourages sparseness. For a given speaker signal, the dictionary obtained from this model has good representational power while supporting optimal discrimination of the classes. An expectation-maximization algorithm is derived to train the model with a variational approximation to a range of heavy-tailed distributions whose limit is the Laplace. The variational approximation is also used to compute the likelihood ratio score of all speaker trials. This approach performed well on the core-extended conditions of the NIST 2010 Speaker Recognition Evaluation and is competitive with Gaussian Probabilistic Linear Discriminant Analysis in terms of normalized Decision Cost Function and Equal Error Rate.

  • Noise Robust Feature Scheme for Automatic Speech Recognition Based on Auditory Perceptual Mechanisms

    Shang CAI  Yeming XIAO  Jielin PAN  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-Speech and Hearing
    Vol: E95-D No:6  Page(s): 1610-1618

    Mel Frequency Cepstral Coefficients (MFCC) are the most popular acoustic features used in automatic speech recognition (ASR), mainly because the coefficients capture the most useful information of the speech and fit well with the assumptions used in hidden Markov models. As is well known, MFCCs already employ several principles with known counterparts in the peripheral properties of human hearing: decoupling across frequency, mel-warping of the frequency axis, log-compression of energy, etc. It is therefore natural to introduce further mechanisms of the auditory periphery to improve the noise robustness of MFCC. In this paper, a k-nearest neighbors based frequency masking filter is proposed to reduce the audibility of spectral valleys, which are sensitive to noise. In addition, Moore and Glasberg's critical band equivalent rectangular bandwidth (ERB) expression is utilized to determine the filter bandwidth. Furthermore, a new bandpass infinite impulse response (IIR) filter is proposed to imitate the temporal masking phenomenon of the human auditory system. These three auditory perceptual mechanisms are combined with the standard MFCC algorithm, and a revised MFCC extraction scheme is presented. Recognition performance with the standard MFCC, RASTA perceptual linear prediction (RASTA-PLP) and the proposed feature extraction scheme is evaluated on a medium-vocabulary isolated-word recognition task and a more complex large vocabulary continuous speech recognition (LVCSR) task. Experimental results show that consistent robustness against background noise is achieved on both tasks, and the proposed method outperforms both the standard MFCC and RASTA-PLP.
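
    The ERB expression used for the filter bandwidth has a standard closed form; as commonly stated by Moore and Glasberg, ERB(f) = 24.7(4.37 f/1000 + 1) Hz for a filter centred at f Hz:

    ```python
    def erb_bandwidth(fc_hz):
        """Moore & Glasberg equivalent rectangular bandwidth (Hz) of the
        auditory filter centred at fc_hz."""
        return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

    # e.g. erb_bandwidth(1000.0) -> about 132.6 Hz
    ```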

  • Policy Optimization for Spoken Dialog Management Using Genetic Algorithm

    Hang REN  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-Spoken dialog system
    Publicized: 2016/07/19  Vol: E99-D No:10  Page(s): 2499-2507

    The optimization of spoken dialog management policies is a non-trivial task due to erroneous inputs from the speech recognition and language understanding modules. The dialog manager needs to ground uncertain semantic information at times to fully understand the needs of human users and successfully complete the required dialog tasks. Approaches based on reinforcement learning are currently mainstream in academia and have been proved effective, especially when operating in noisy environments. However, in reinforcement learning the dialog strategy is often represented by a complex numeric model and is thus incomprehensible to humans; the trained policies are very difficult for dialog system designers to verify or modify, which largely limits deployment in commercial applications. In this paper we propose a novel framework for optimizing dialog policies specified in a human-readable domain language using a genetic algorithm. We present learning algorithms that use a user simulator and real human-machine dialog corpora. Empirical results show that the proposed approach achieves performance on par with some state-of-the-art reinforcement learning algorithms while maintaining a comprehensible policy structure.
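
    The optimization loop itself is a standard genetic algorithm over policy structures. A generic sketch follows, in which `mutate`, `crossover`, and `fitness` are assumed placeholders (fitness would run dialogs against the user simulator); the paper's policy representation and operators are not reproduced.

    ```python
    import random

    def genetic_search(init_policy, mutate, crossover, fitness,
                       pop_size=40, generations=100, elite=4):
        """Evolve a population of policies; return the fittest one found."""
        population = [mutate(init_policy) for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=fitness, reverse=True)
            parents = scored[:pop_size // 2]
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - elite)]
            population = scored[:elite] + children  # elitism keeps best policies
        return max(population, key=fitness)
    ```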

  • Discriminative Pronunciation Modeling Using the MPE Criterion

    Meixu SONG  Jielin PAN  Qingwei ZHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing
    Publicized: 2014/12/02  Vol: E98-D No:3  Page(s): 717-720

    Introducing pronunciation models into decoding has been proven to benefit LVCSR. In this paper, a discriminative pronunciation modeling method is presented within the framework of Minimum Phone Error (MPE) training for HMM/GMM. To bring the pronunciation models into MPE training, the auxiliary function is rewritten at the word level and decomposed into two parts: one co-trains the acoustic models, and the other discriminatively trains the pronunciation models. On a Mandarin conversational telephone speech recognition task, compared to a baseline using a canonical lexicon, the discriminative pronunciation models reduced the absolute Character Error Rate (CER) by 0.7% on the LDC test set, and with acoustic model co-training an additional 0.8% CER reduction was achieved.

  • Speech Enhancement Using Improved Adaptive Null-Forming in Frequency Domain with Postfilter

    Heng ZHANG  Qiang FU  Yonghong YAN  

     
    LETTER-Speech and Hearing
    Vol: E91-A No:12  Page(s): 3812-3816

    In this letter, a two-channel frequency-domain speech enhancement algorithm is proposed. The algorithm is designed to achieve better overall performance with a relatively small array size. An improved version of adaptive null-forming is used, in which noise cancellation is implemented in auditory subbands, and an OM-LSA based postfiltering stage further purifies the output. The algorithm also features interaction between the array processing and the postfilter to make the filter adaptation more robust. This approach achieves considerable improvement in signal-to-noise ratio (SNR) and in the subjective quality of the desired speech. Experiments confirm the effectiveness of the proposed system.
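
    As background for the array stage: a two-microphone frequency-domain null-former delays one channel and subtracts it from the other, steering a spatial null toward the interferer. The fixed (non-adaptive) textbook sketch below conveys the idea; the letter's adaptive subband version and OM-LSA postfilter are not reproduced.

    ```python
    import numpy as np
    from scipy.signal import stft, istft

    def null_forming(x1, x2, fs, tau, nperseg=256):
        """x1, x2: microphone signals; tau: interferer inter-mic delay (s)."""
        f, _, X1 = stft(x1, fs=fs, nperseg=nperseg)
        _, _, X2 = stft(x2, fs=fs, nperseg=nperseg)
        Y = X1 - np.exp(-2j * np.pi * f[:, None] * tau) * X2  # null toward tau
        _, y = istft(Y, fs=fs, nperseg=nperseg)
        return y
    ```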

  • Effective Acoustic Modeling for Pronunciation Quality Scoring of Strongly Accented Mandarin Speech

    Fengpei GE  Changliang LIU  Jian SHAO  Fuping PAN  Bin DONG  Yonghong YAN  

     
    PAPER-Speech and Hearing
    Vol: E91-D No:10  Page(s): 2485-2492

    In this paper we present our investigation into improving the performance of our computer-assisted language learning (CALL) system by exploiting the acoustic model and features within the speech recognition framework. First, to alleviate channel distortion, speaker-dependent cepstral mean normalization (CMN) is adopted, improving the average correlation coefficient (average CC) between machine and expert scores from 78.00% to 84.14%. Second, heteroscedastic linear discriminant analysis (HLDA) is adopted to enhance the discriminability of the acoustic model, which increases the average CC from 84.14% to 84.62%. HLDA also makes the scoring accuracy more stable across pronunciation proficiency levels, raising the speaker correct-rank rate from 85.59% to 90.99%. Finally, we use maximum a posteriori (MAP) estimation to tune the acoustic model to strongly accented test speech, improving the average CC from 84.62% to 86.57%. Together, these three techniques improve the accuracy of pronunciation quality evaluation.
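
    The first technique is simple to state: speaker-dependent CMN subtracts the mean cepstrum pooled over all of one speaker's utterances, removing the stationary channel component. A minimal sketch:

    ```python
    import numpy as np

    def speaker_cmn(utterances):
        """utterances: list of (T_i, n_ceps) cepstral matrices for one speaker.
        Subtract the speaker-level mean cepstrum from every frame."""
        mean = np.vstack(utterances).mean(axis=0)  # pooled mean, shape (n_ceps,)
        return [u - mean for u in utterances]
    ```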

  • Fuzzy Matching of Semantic Class in Chinese Spoken Language Understanding

    Yanling LI  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-Natural Language Processing
    Vol: E96-D No:8  Page(s): 1845-1852

    Semantic concepts in an utterance are obtained by a fuzzy matching method to handle problems such as word variation induced by automatic speech recognition (ASR) or key information omitted by users during spoken language understanding (SLU). A two-stage method is proposed: first, a conditional random field (CRF) model segments and labels entity names in an input sentence; second, fuzzy matching based on a similarity function is conducted between the named entities labeled by the CRF model and the reference entries of a dictionary. The experiments compare performance in terms of accuracy and processing speed. Among the four similarity measures examined, Dice similarity and TF-based cosine similarity achieve the best accuracy, both reaching at least 93% F1-measure; the latter improves on q-gram and improved edit distance, two conventional string fuzzy matching methods, by 8.8% and 9%, respectively.
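
    The two best-performing measures are easy to reproduce. Below is a minimal sketch of Dice similarity over character bigrams and TF-based cosine similarity; the exact tokenization used in the paper is not specified here, so character-level counting is our assumption.

    ```python
    import math
    from collections import Counter

    def dice(a, b):
        """Dice similarity on character-bigram multisets."""
        A = Counter(a[i:i + 2] for i in range(len(a) - 1))
        B = Counter(b[i:i + 2] for i in range(len(b) - 1))
        denom = sum(A.values()) + sum(B.values())
        return 2 * sum((A & B).values()) / denom if denom else 0.0

    def cosine_tf(a, b):
        """Cosine similarity on character term-frequency vectors."""
        A, B = Counter(a), Counter(b)
        dot = sum(A[c] * B[c] for c in A)
        na = math.sqrt(sum(v * v for v in A.values()))
        nb = math.sqrt(sum(v * v for v in B.values()))
        return dot / (na * nb) if na and nb else 0.0
    ```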

  • A One-Pass Real-Time Decoder Using Memory-Efficient State Network

    Jian SHAO  Ta LI  Qingqing ZHANG  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-ASR System Architecture
    Vol: E91-D No:3  Page(s): 529-537

    This paper presents a decoder that statically optimizes part of the knowledge sources while handling the others dynamically. The lexicon, phonetic contexts and acoustic model are statically integrated to form a memory-efficient state network, while the language model (LM) is dynamically incorporated on the fly by means of extended tokens. The novelties of our approach to constructing the state network are (1) introducing two layers of dummy nodes to cluster the cross-word (CW) context dependent fan-in and fan-out triphones, (2) introducing a so-called "WI layer" to store word identities, placing its nodes in the non-shared mid-part of the network, and (3) optimizing the network at the state level by a sufficient forward and backward node-merge process. The state network is organized as a multi-layer structure for distinct token propagation at each layer. By exploiting the characteristics of the state network, several techniques including LM look-ahead, LM caching and beam pruning are specially designed for search efficiency. In particular, a layer-dependent pruning method is proposed to further reduce the search space; it exploits the neck-like characteristics of the WI layer and the reduced variety of word endings, enabling tighter beams without introducing many search errors. In addition, LM compression, lattice-based bookkeeping and lattice garbage collection are employed to reduce memory requirements. Experiments are carried out on a Mandarin spontaneous speech recognition task where the decoder uses a trigram LM and CW triphone models. A comparison with HDecode of the HTK toolkit shows that, within 1% performance deviation, our decoder runs 5 times faster with half the memory footprint.

  • Automatic Language Identification with Discriminative Language Characterization Based on SVM

    Hongbin SUO  Ming LI  Ping LU  Yonghong YAN  

     
    PAPER-Language Identification
    Vol: E91-D No:3  Page(s): 567-575

    Robust automatic language identification (LID) is the task of identifying the language of a short utterance spoken by an unknown speaker. Mainstream approaches include parallel phone recognition language modeling (PPRLM), support vector machines (SVM) and general Gaussian mixture models (GMMs). These systems map the cepstral features of spoken utterances into high-level scores via classifiers. In this paper, to increase the dimension of the score vector and alleviate inter-speaker variability within the same language, multiple data groups based on supervised speaker clustering are employed to generate discriminative language characterization score vectors (DLCSV). Back-end SVM classifiers are used to model the probability distribution of each target language in the DLCSV space, and the output scores of the back-end classifiers are calibrated by a pair-wise posterior probability estimation (PPPE) algorithm. The proposed language identification frameworks are evaluated on the 2003 NIST Language Recognition Evaluation (LRE) database, and the experiments show that the described system produces results comparable to existing systems. In particular, the SVM framework achieves an equal error rate (EER) of 4.0% in the 30-second task, outperforming state-of-the-art systems by more than 30% relative error reduction, while the proposed PPRLM and GMM systems achieve EERs of 5.1% and 5.0%, respectively.

  • A Hybrid Approach for Reverberation Simulation

    Risheng XIA  Junfeng LI  Andrea PRIMAVERA  Stefania CECCHI  Yôiti SUZUKI  Yonghong YAN  

     
    PAPER-Engineering Acoustics
    Vol: E98-A No:10  Page(s): 2101-2108

    Real-time high-quality reverberation simulation plays an important role in many modern applications, such as computer games and virtual reality. Traditional physically and perceptually based reverberation simulation techniques, however, suffer from high computational cost or fail to capture the specific characteristics of the enclosure. In this paper, a hybrid reverberation simulation approach is proposed in which early reflections are reproduced by convolution with an impulse response modeled by the image-source method, and late reverberation is reproduced by a feedback delay network. A parametric predictor of energy decay relief is presented for the modeled early reflections and then exploited to compute the parameters of the feedback delay network, ensuring a smooth transition in the time-frequency domain from early to late reflections. Numerical simulations and listening tests validate the effectiveness of the proposed hybrid reverberation simulation approach.
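
    To illustrate the late-reverberation half: a feedback delay network circulates the signal through parallel delay lines coupled by an orthogonal feedback matrix (a Householder matrix here). The sketch below uses assumed delay lengths and a single broadband decay gain; the paper instead derives these parameters from the modeled early reflections.

    ```python
    import numpy as np

    def fdn_reverb(x, delays=(1031, 1327, 1523, 1871), g=0.85):
        """Minimal feedback delay network: 4 delay lines, Householder feedback."""
        N = len(delays)
        A = np.eye(N) - (2.0 / N) * np.ones((N, N))  # orthogonal, energy-preserving
        bufs = [np.zeros(d) for d in delays]
        ptrs = [0] * N
        y = np.zeros(len(x))
        for n in range(len(x)):
            outs = np.array([bufs[i][ptrs[i]] for i in range(N)])
            y[n] = outs.sum()                        # tap the delay-line outputs
            feedback = g * (A @ outs)                # mix and attenuate
            for i in range(N):
                bufs[i][ptrs[i]] = x[n] + feedback[i]
                ptrs[i] = (ptrs[i] + 1) % delays[i]  # circular buffer advance
        return y
    ```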

Showing 1-20 of 34 hits