
Author Search Result

[Author] Jianping ZHANG (2 hits)

  • Effects of the Temporal Fine Structure in Different Frequency Bands on Mandarin Tone Perception

    Lin YANG  Jianping ZHANG  Jian SHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing

    Vol. E91-D, No. 2, pp. 371-374

    This letter evaluates the relative contributions of temporal fine structure cues in various frequency bands to Mandarin tone perception using novel "auditory chimaeras". Our results confirm the importance of temporal fine structure cues to lexical tone perception and identify the dominant region for lexical tone perception: the second to fifth harmonics contribute no less than the fundamental frequency itself.
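    The envelope/fine-structure decomposition that auditory chimaeras are built from is conventionally derived from the Hilbert analytic signal of each band-limited channel: the magnitude gives the envelope and the cosine of the instantaneous phase gives the temporal fine structure. The sketch below illustrates that decomposition only, not the authors' exact chimaera synthesis; it uses a naive O(N^2) DFT so it stays dependency-free, and the function names are hypothetical.

    ```python
    import cmath
    import math

    def analytic_signal(x):
        """Analytic signal via the discrete Hilbert transform (naive DFT, O(N^2))."""
        n = len(x)
        # Forward DFT
        X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
             for k in range(n)]
        # Keep DC and Nyquist, double positive frequencies, zero negative ones
        h = [0.0] * n
        h[0] = 1.0
        if n % 2 == 0:
            h[n // 2] = 1.0
            for k in range(1, n // 2):
                h[k] = 2.0
        else:
            for k in range(1, (n + 1) // 2):
                h[k] = 2.0
        Xa = [X[k] * h[k] for k in range(n)]
        # Inverse DFT yields the complex analytic signal
        return [sum(Xa[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
                for t in range(n)]

    def envelope_and_tfs(x):
        """Split a band-limited signal into its Hilbert envelope and
        temporal fine structure (cosine of the instantaneous phase)."""
        a = analytic_signal(x)
        env = [abs(z) for z in a]
        tfs = [math.cos(cmath.phase(z)) for z in a]
        return env, tfs
    ```

    For a pure tone with an integer number of cycles, the envelope comes out flat at 1 and the fine structure reproduces the tone itself, which is the sanity check usually applied to this decomposition.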

  • A Hybrid Speech Emotion Recognition System Based on Spectral and Prosodic Features

    Yu ZHOU  Junfeng LI  Yanqing SUN  Jianping ZHANG  Yonghong YAN  Masato AKAGI  

     
    PAPER-Human-computer Interaction

    Vol. E93-D, No. 10, pp. 2813-2821

    In this paper, we present a hybrid speech emotion recognition system exploiting both spectral and prosodic features in speech. For capturing the emotional information in the spectral domain, we propose a new spectral feature extraction method that applies a novel non-uniform subband processing instead of the mel-frequency subbands used in Mel-Frequency Cepstral Coefficients (MFCC). For prosodic features, a set of features closely correlated with speech emotional states is selected. In the proposed hybrid emotion recognition system, due to the inherently different characteristics of these two kinds of features (e.g., data size), the newly extracted spectral features are modeled by a Gaussian Mixture Model (GMM) and the selected prosodic features are modeled by a Support Vector Machine (SVM). The final result of the proposed emotion recognition system is obtained by combining the results from these two subsystems. Experimental results show that (1) the proposed non-uniform spectral features are more effective than the traditional MFCC features for emotion recognition; and (2) the proposed hybrid emotion recognition system using both spectral and prosodic features yields a relative recognition error reduction of 17.0% over traditional recognition systems using only spectral features, and of 62.3% over those using only prosodic features.
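    The abstract states that the final decision combines the GMM and SVM subsystem outputs but does not spell out the combination rule. A common choice for this kind of hybrid is score-level fusion: normalize each subsystem's per-class scores and take a weighted sum. The sketch below illustrates that generic rule under assumed inputs; the function name, score format, and the weight are hypothetical, not taken from the paper.

    ```python
    def fuse_scores(spectral_scores, prosodic_scores, w=0.6):
        """Weighted score-level fusion of two subsystem outputs.

        spectral_scores / prosodic_scores: dicts mapping emotion label ->
        normalized score (e.g. GMM log-likelihoods and SVM decision values
        mapped to a common range). `w` weights the spectral subsystem; the
        default 0.6 is a hypothetical knob, not a value from the paper.
        Returns the label with the highest fused score.
        """
        fused = {label: w * spectral_scores[label] + (1 - w) * prosodic_scores[label]
                 for label in spectral_scores}
        return max(fused, key=fused.get)
    ```

    With such a rule, the fusion weight would typically be tuned on held-out data so that the stronger subsystem dominates without discarding the complementary cues from the other.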