
Keyword Search Result

[Keyword] reverberant speech (3 hits)

Showing 1-3 of 3 hits
  • MTF-Based Kalman Filtering with Linear Prediction for Power Envelope Restoration in Noisy Reverberant Environments

    Yang LIU, Shota MORITA, Masashi UNOKI

    PAPER-Digital Signal Processing

    Vol: E99-A No:2  Page(s): 560-569

    This paper proposes a method based on the modulation transfer function (MTF) to restore the power envelope of noisy reverberant speech by using a Kalman filter with linear prediction (LP). Its advantage is that it can simultaneously suppress the effects of noise and reverberation by restoring the smeared MTF without measuring room impulse responses. The scheme has two processes: power envelope subtraction and power envelope inverse filtering. In the subtraction process, the statistical properties of the observation noise and the driving noise of the power envelope are examined against the Kalman filter's requirement that both noises be white and Gaussian. Because the LP coefficients strongly affect Kalman filter performance, a method is also developed for deriving them directly from noisy reverberant speech. In the inverse-filtering (dereverberation) process, an inverse filter is applied to remove the effects of reverberation. Objective experiments were conducted under various noisy reverberant conditions to evaluate how well the proposed MTF-based Kalman filtering method improves the signal-to-error ratio (SER) and the correlation between the restored and original power envelopes compared with conventional methods. The results show that the proposed method improves both SER and correlation more than the conventional methods.
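
The following is a minimal sketch, not the authors' implementation, of the core idea in the abstract above: the clean power envelope is modelled as a linear-prediction (AR) process, and a Kalman filter restores it from a noisy observation. The Yule-Walker LP estimator, the companion-form state space, and all parameter choices are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code): Kalman filtering of a noisy
# power envelope whose clean dynamics are modelled by linear prediction (AR).
import numpy as np

def ar_coefficients(envelope, order):
    """Estimate LP/AR coefficients of an envelope via the Yule-Walker equations."""
    e = envelope - envelope.mean()
    r = np.correlate(e, e, mode="full")[len(e) - 1:len(e) + order]   # lags 0..order
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def kalman_restore(noisy_env, a, obs_var, drive_var):
    """Restore a power envelope with a Kalman filter whose state follows an AR(p) model."""
    p = len(a)
    A = np.zeros((p, p)); A[0, :] = a; A[1:, :-1] = np.eye(p - 1)    # companion-form transition
    H = np.zeros((1, p)); H[0, 0] = 1.0                              # observe the current sample
    Q = np.zeros((p, p)); Q[0, 0] = drive_var                        # driving-noise covariance
    x = np.zeros((p, 1)); P = np.eye(p)
    out = np.empty_like(np.asarray(noisy_env, dtype=float))
    for k, y in enumerate(noisy_env):
        x = A @ x; P = A @ P @ A.T + Q                               # predict
        S = H @ P @ H.T + obs_var                                    # innovation variance
        K = P @ H.T / S                                              # Kalman gain
        x = x + K * (y - H @ x); P = (np.eye(p) - K @ H) @ P         # update
        out[k] = x[0, 0]
    return out
```

In this companion-form state space the first state element is the current envelope sample, so the driving noise enters only through Q[0, 0]; the observation and driving-noise variances would in practice have to be estimated from the data, which is what the paper's subtraction process addresses.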

  • Recognizing Reverberant Speech Based on Amplitude and Frequency Modulation

    Yotaro KUBO, Shigeki OKAWA, Akira KUREMATSU, Katsuhiko SHIRAI

    PAPER-ASR under Reverberant Conditions

    Vol: E91-D No:3  Page(s): 448-456

    We have attempted to recognize reverberant speech with a novel speech recognition system that depends not only on the spectral envelope and amplitude modulation but also on frequency modulation. Most features used by modern speech recognition systems, such as MFCC, PLP, and TRAPS, are derived from the energy envelopes of narrowband signals, discarding the information in the carrier signals. However, experiments show that, apart from the spectral/time envelope and its modulation, the information at the zero-crossing points of the carrier signals also plays a significant role in human speech recognition. In realistic environments, a feature that depends on only a limited set of signal properties can easily be corrupted. To operate an automatic speech recognizer in an unknown environment, it is therefore important to extract information from additional signal properties and to combine them so as to minimize the effects of the environment. In this paper, we propose a method for analyzing the carrier signals that most speech recognition systems discard. Our system consists of two nonlinear discriminant analyzers based on multilayer perceptrons. One is HATS, which efficiently captures the amplitude modulation of narrowband signals; the other is the pseudo-instantaneous frequency analyzer proposed in this paper, which efficiently captures their frequency modulation. The two analyzers are combined by the entropy-based feature combination method introduced by Okawa et al. Section 2 introduces pseudo-instantaneous frequencies as a property of the carrier signal, Sect. 3 describes the previous AM analysis method, Sect. 4 describes the proposed system, Sect. 5 presents the experimental setup, and Sect. 6 discusses the results. We evaluate the proposed method on continuous digit recognition of reverberant speech, where it exhibits considerable improvement over the MFCC-based feature extraction system.
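
As a rough illustration of the AM/FM feature idea in this abstract (not the authors' HATS or pseudo-instantaneous frequency analyzer), the sketch below extracts one narrow band of a signal and measures the amplitude envelope and the instantaneous frequency of its carrier via the analytic signal; the filter design, band edges, and test signal are assumptions.

```python
# Illustrative sketch only: per-band AM (envelope) and FM (instantaneous
# frequency) analysis of a narrowband carrier, using the analytic signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_am_fm(signal, fs, f_lo, f_hi):
    """Return the AM envelope and instantaneous frequency (Hz) of one band."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, signal)                 # narrowband signal
    analytic = hilbert(band)                        # analytic signal of the band
    am = np.abs(analytic)                           # amplitude modulation (envelope)
    phase = np.unwrap(np.angle(analytic))
    fm = np.diff(phase) * fs / (2.0 * np.pi)        # carrier frequency modulation
    return am, fm

# Example with a synthetic test signal: a 1 kHz tone with slow sinusoidal FM.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (1000 * t + 5 * np.sin(2 * np.pi * 3 * t)))
am, fm = band_am_fm(x, fs, 800, 1200)
```

The paper's pseudo-instantaneous frequency may be defined differently (for example, from zero-crossing intervals); the analytic-signal phase derivative used here is simply one common way to measure carrier frequency modulation.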

  • Acoustic Model Adaptation Using First-Order Linear Prediction for Reverberant Speech

    Tetsuya TAKIGUCHI, Masafumi NISHIMURA, Yasuo ARIKI

    PAPER-Speech Recognition

    Vol: E89-D No:3  Page(s): 908-914

    This paper describes a hands-free speech recognition technique based on adapting the acoustic model to reverberant speech. In hands-free speech recognition, recognition accuracy is degraded by reverberation, since each segment of speech is affected by the reflection energy of the preceding segment. To compensate for the reflection signal, we introduce a frame-by-frame adaptation method that adds the reflection signal to the means of the acoustic model. The reflection signal is approximated by a first-order linear prediction from the observation signal at the preceding frame, and the linear prediction coefficient is estimated with a maximum-likelihood method using the EM algorithm, which maximizes the likelihood of the adaptation data. The effectiveness of the method is confirmed by word recognition experiments on reverberant speech.
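
A minimal sketch of the adaptation idea in this abstract, under assumptions not taken from the paper: the features live in a domain where the reflection adds linearly, `log_likelihood` is a hypothetical user-supplied scoring function, and the paper's EM-based maximum-likelihood estimation of the LP coefficient is replaced here by a simple grid search for illustration.

```python
# Illustrative sketch only: frame-by-frame mean adaptation in which the
# previous observation frame, scaled by a first-order LP coefficient alpha,
# is added to the acoustic-model means before scoring.
import numpy as np

def adapt_means(means, observations, alpha):
    """Shift each Gaussian mean by alpha times the preceding observation frame.

    means:        (num_states, dim) clean-speech model means
    observations: (num_frames, dim) observed reverberant feature frames
    alpha:        scalar first-order linear-prediction coefficient
    Returns per-frame adapted means, shape (num_frames, num_states, dim).
    """
    prev = np.vstack([np.zeros((1, observations.shape[1])), observations[:-1]])
    return means[None, :, :] + alpha * prev[:, None, :]

def choose_alpha(means, observations, log_likelihood, alphas=np.linspace(0.0, 1.0, 21)):
    """Pick the alpha that maximizes the likelihood of the adaptation data.

    The paper estimates the coefficient with an EM-based maximum-likelihood
    procedure; this grid search is a simple stand-in for illustration, and
    log_likelihood(adapted_means, observations) is a hypothetical scorer.
    """
    scores = [log_likelihood(adapt_means(means, observations, a), observations)
              for a in alphas]
    return alphas[int(np.argmax(scores))]
```

Because the adaptation is frame-by-frame, each frame is scored against its own adapted means, mirroring the abstract's description of adding the predicted reflection of the preceding frame to the model.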