
Keyword Search Result

[Keyword] spectral subtraction(25hit)

1-20hit(25hit)

  • An Improved Perceptual MBSS Noise Reduction with an SNR-Based VAD for a Fully Operational Digital Hearing Aid

    Zhaoyang GUO  Xin'an WANG  Bo WANG  Shanshan YONG  

     
    PAPER-Speech and Hearing

  Publicized:
    2017/02/17
      Vol:
    E100-D No:5
      Page(s):
    1087-1096

    This paper first reviews state-of-the-art noise reduction methods and points out their weaknesses in noise reduction performance and speech quality, especially in low signal-to-noise ratio (SNR) environments. It then presents an improved perceptual multiband spectral subtraction (MBSS) noise reduction algorithm (NRA) and a novel robust voice activity detection (VAD) based on the amended sub-band SNR. The proposed SNR-based VAD considerably increases the accuracy of discrimination between noise and speech frames. Simulation results show that the proposed NRA achieves better segmental SNR (segSNR) and perceptual evaluation of speech quality (PESQ) performance than other noise reduction algorithms, especially in low-SNR environments. In addition, a fully operational digital hearing aid chip based on the proposed NRA is designed and fabricated in a 0.13 µm CMOS process. The chip implementation shows that the whole chip draws 1.3 mA at 1.2 V operation. Acoustic tests show that the maximum output sound pressure level (OSPL) is 114.6 dB SPL, the equivalent input noise is 5.9 dB SPL, and the total harmonic distortion is 2.5%. The proposed digital hearing aid chip is thus a promising candidate for high-performance hearing-aid systems.
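    As a rough illustration of the multiband subtraction idea described in this abstract (not the authors' exact algorithm; the band layout, over-subtraction rule, and all parameter values below are illustrative assumptions), a Python sketch:

    ```python
    import numpy as np

    def multiband_spectral_subtraction(noisy_mag, noise_mag, band_edges,
                                       alpha=4.0, beta=0.01):
        """Illustrative multiband spectral subtraction (MBSS) for one frame.

        noisy_mag, noise_mag: magnitude spectra (length = n_bins).
        band_edges: list of (lo, hi) bin ranges defining the sub-bands.
        alpha: base over-subtraction factor; beta: spectral floor.
        """
        out = np.empty_like(noisy_mag)
        for lo, hi in band_edges:
            # Per-band SNR controls how aggressively this band is subtracted.
            snr_db = 10 * np.log10(np.sum(noisy_mag[lo:hi] ** 2) /
                                   (np.sum(noise_mag[lo:hi] ** 2) + 1e-12))
            # Lower-SNR bands get a larger subtraction factor (illustrative rule).
            delta = np.clip(alpha - 0.15 * snr_db, 1.0, 6.0)
            sub = noisy_mag[lo:hi] ** 2 - delta * noise_mag[lo:hi] ** 2
            # Spectral floor prevents negative power and tempers musical noise.
            out[lo:hi] = np.sqrt(np.maximum(sub, beta * noisy_mag[lo:hi] ** 2))
        return out
    ```

    The per-band SNR term is what distinguishes MBSS from plain spectral subtraction: noise-dominated bands are suppressed harder without over-attenuating speech-dominated ones.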

  • A Hybrid Approach to Electrolaryngeal Speech Enhancement Based on Noise Reduction and Statistical Excitation Generation

    Kou TANAKA  Tomoki TODA  Graham NEUBIG  Sakriani SAKTI  Satoshi NAKAMURA  

     
    PAPER-Voice Conversion and Speech Enhancement

      Vol:
    E97-D No:6
      Page(s):
    1429-1437

    This paper presents an electrolaryngeal (EL) speech enhancement method capable of significantly improving the naturalness of EL speech while causing no degradation in its intelligibility. An electrolarynx is an external device that artificially generates excitation sounds to enable laryngectomees to produce EL speech. Although proficient laryngectomees can produce quite intelligible EL speech, it sounds very unnatural owing to the mechanical excitation produced by the device. Moreover, the excitation sounds produced by the device often leak outside, adding to the EL speech as noise. To address these issues, there are two main conventional approaches to EL speech enhancement: noise reduction and statistical voice conversion (VC). The former usually causes no degradation in intelligibility but yields only small improvements in naturalness, as the mechanical excitation sounds remain essentially unchanged. The latter significantly improves the naturalness of EL speech using spectral and excitation parameters of natural voices converted from acoustic parameters of EL speech, but it usually degrades intelligibility owing to conversion errors. We propose a hybrid approach that uses a noise reduction method for enhancing spectral parameters and a statistical voice conversion method for predicting excitation parameters. Moreover, we modify the prediction process of the excitation parameters to improve its accuracy and to reduce adverse effects caused by unvoiced/voiced prediction errors. Experimental results demonstrate that the proposed method yields significant improvements in naturalness compared with EL speech while keeping intelligibility high.

  • Spectral Subtraction Based on Non-extensive Statistics for Speech Recognition

    Hilman PARDEDE  Koji IWANO  Koichi SHINODA  

     
    PAPER-Speech and Hearing

      Vol:
    E96-D No:8
      Page(s):
    1774-1782

    Spectral subtraction (SS) is an additive noise removal method derived in an extensive statistical framework. In spectral subtraction, it is assumed that the speech and noise spectra follow Gaussian distributions and are independent of each other; hence, noisy speech also follows a Gaussian distribution. The spectral subtraction formula is obtained by maximizing the likelihood of the noisy speech distribution with respect to its variance. However, it is well known that noisy speech observed in real situations often follows a heavy-tailed distribution rather than a Gaussian one. In this paper, we introduce the q-Gaussian distribution from non-extensive statistics to represent the distribution of noisy speech and derive a new spectral subtraction method based on it. We found that the q-Gaussian distribution fits the noisy speech distribution better than the Gaussian distribution does. Our speech recognition experiments using the Aurora-2 database showed that the proposed method, q-spectral subtraction (q-SS), outperformed the conventional SS method.
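    For reference, the conventional Gaussian-derived baseline that q-SS generalizes can be sketched in a few lines (the q-Gaussian variant itself is not reproduced here; the floor value is an illustrative assumption):

    ```python
    import numpy as np

    def spectral_subtraction(noisy_power, noise_var, floor=0.01):
        """Conventional power spectral subtraction: subtract the estimated
        noise variance from the noisy power spectrum in each frequency bin,
        flooring the result to avoid negative power estimates.

        noisy_power: per-bin power spectrum of a noisy frame.
        noise_var: per-bin noise variance estimated from non-speech frames.
        """
        clean_power = noisy_power - noise_var
        return np.maximum(clean_power, floor * noisy_power)
    ```

    The flooring step is where the familiar "musical noise" artifacts originate, which is part of what motivates the heavy-tailed modeling in the paper above.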

  • Distant-Talking Speech Recognition Based on Spectral Subtraction by Multi-Channel LMS Algorithm

    Longbiao WANG  Norihide KITAOKA  Seiichi NAKAGAWA  

     
    PAPER-Speech and Hearing

      Vol:
    E94-D No:3
      Page(s):
    659-667

    We propose a blind dereverberation method based on spectral subtraction using a multi-channel least mean squares (MCLMS) algorithm for distant-talking speech recognition. In a distant-talking environment, the channel impulse response is longer than the short-term spectral analysis window. By treating the late reverberation as additive noise, a noise reduction technique based on spectral subtraction is used to estimate the power spectrum of the clean speech from the power spectra of the distorted speech and the unknown impulse responses. To estimate the power spectra of the impulse responses, a variable step-size unconstrained MCLMS (VSS-UMCLMS) algorithm for identifying the impulse responses in the time domain is extended to the frequency domain. To reduce the effect of estimation errors in the channel impulse response, we normalize the early reverberation by cepstral mean normalization (CMN) instead of spectral subtraction using the estimated impulse response. Furthermore, the proposed method is combined with conventional delay-and-sum beamforming. We conducted recognition experiments on distorted speech simulated by convolving multi-channel impulse responses with clean speech. The proposed method achieved a relative error reduction rate of 22.4% over conventional CMN. By combining the proposed method with beamforming, a relative error reduction rate of 24.5% over conventional CMN with beamforming was achieved using only an isolated word (about 0.6 s in duration) to estimate the spectrum of the impulse response.
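    The CMN step used above to normalize the early reverberation is a standard operation and can be sketched directly (a minimal sketch; the paper's surrounding dereverberation pipeline is not shown):

    ```python
    import numpy as np

    def cepstral_mean_normalization(cepstra):
        """Cepstral mean normalization (CMN): subtracting the utterance-level
        mean from each cepstral frame removes stationary convolutional
        distortion, here the early-reverberation component.

        cepstra: array of shape (n_frames, n_coeffs).
        """
        return cepstra - cepstra.mean(axis=0, keepdims=True)
    ```

    Because convolution in the time domain becomes addition in the cepstral domain, a time-invariant channel shows up as a constant offset that the mean subtraction cancels.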

  • A Subtractive-Type Speech Enhancement Using the Perceptual Frequency-Weighting Function

    Seiji HAYASHI  Hiroyuki INUKAI  Masahiro SUGUIMOTO  

     
    PAPER-Speech and Hearing

      Vol:
    E92-A No:1
      Page(s):
    226-234

    The present paper describes quality enhancement of speech corrupted by additive background noise in a single-channel system. The proposed approach introduces a perceptual criterion, via a frequency-weighting filter, into a subtractive-type enhancement process. Although subtractive-type methods are attractive because of their simplicity, they produce unnatural and unpleasant residual noise, and it is difficult to select fixed parameters optimized for all speech and noise conditions. A new and effective algorithm is therefore developed based on the masking properties of the human ear. The algorithm allows automatic adaptation of the enhancement system in time and frequency and determines a suitable noise estimate according to the frequency content of the noisy input speech. Experimental results demonstrate that the proposed approach can efficiently remove additive noise under various kinds of noise corruption.

  • Robust Speech Recognition by Model Adaptation and Normalization Using Pre-Observed Noise

    Satoshi KOBASHIKAWA  Satoshi TAKAHASHI  

     
    PAPER-Noisy Speech Recognition

      Vol:
    E91-D No:3
      Page(s):
    422-429

    Users require speech recognition systems that offer rapid response and high accuracy concurrently. Speech recognition accuracy is degraded by additive noise, imposed by ambient noise, and by convolutional noise, created by space transfer characteristics, especially in distant-talking situations. Against each type of noise, existing model adaptation techniques achieve robustness by using HMM composition and CMN (cepstral mean normalization). Since they need an additive noise sample as well as a user speech sample to generate the required models, they cannot achieve rapid response, even though the additive noise alone could be captured in a preceding step. The technique proposed herein uses just this pre-observed additive noise to generate a model that is both adapted and normalized against the two types of noise. When the user's speech sample is captured, only online CMN need be performed to start the recognition processing, so the technique offers rapid response. In addition, to cover the unpredictable S/N values possible in real applications, the technique creates several S/N HMMs. Simulations using artificial speech data show that the proposed technique increased the character correct rate by 11.62% compared to CMN.

  • Ears of the Robot: Three Simultaneous Speech Segregation and Recognition Using Robot-Mounted Microphones

    Naoya MOCHIKI  Tetsuji OGAWA  Tetsunori KOBAYASHI  

     
    LETTER-Speech and Hearing

      Vol:
    E90-D No:9
      Page(s):
    1465-1468

    A new type of sound source segregation method using robot-mounted microphones, which is free from strict head-related transfer function (HRTF) estimation, has been proposed and successfully applied to three-simultaneous-speech recognition. The proposed segregation method exploits sound intensity differences that arise from the particular arrangement of the four directivity microphones and the presence of the robot head acting as a sound barrier. The method consists of three-layered signal processing: two-line SAFIA (binary masking based on narrow-band sound intensity comparison), two-line spectral subtraction, and their integration. We performed a 20 K-vocabulary continuous speech recognition test with three speakers talking simultaneously and achieved more than 70% word error reduction compared with the case without any segregation processing.

  • An Efficient Speech Enhancement Algorithm for Digital Hearing Aids Based on Modified Spectral Subtraction and Companding

    Young Woo LEE  Sang Min LEE  Yoon Sang JI  Jong Shill LEE  Young Joon CHEE  Sung Hwa HONG  Sun I. KIM  In Young KIM  

     
    PAPER-Speech and Hearing

      Vol:
    E90-A No:8
      Page(s):
    1628-1635

    Digital hearing aid users often complain of difficulty in understanding speech in the presence of background noise. To improve speech perception in noisy environments, various speech enhancement algorithms have been applied in digital hearing aids. In this study, a speech enhancement algorithm using modified spectral subtraction and companding is proposed for digital hearing aids. We adjust the bias of the estimated noise spectrum, based on a subtraction factor, to decrease the residual noise. Companding is applied to the channel of the formant frequency, based on a speech presence indicator, to enhance the formant. Noise suppression is achieved while retaining weak speech components and avoiding residual-noise phenomena. Objective and subjective evaluation under various environmental conditions confirmed the improvement due to the proposed algorithm. We measured segmental SNR and the log-likelihood ratio (LLR); of the objective measures tested, segmental SNR showed the highest correlation with subjective measures and LLR the lowest. In addition, spectrograms confirmed that the proposed method significantly reduces residual noise and enhances the formants. A mean opinion score representing global perceptual quality was also collected; the proposed method produced the highest-quality speech. The results show that the proposed speech enhancement algorithm is beneficial for hearing aid users in noisy environments.

  • Speech Enhancement by Overweighting Gain with Nonlinear Structure in Wavelet Packet Transform

    Sung-il JUNG  Younghun KWON  Sung-il YANG  

     
    LETTER-Fundamental Theories for Communications

      Vol:
    E90-B No:8
      Page(s):
    2147-2150

    A speech enhancement method is proposed that can be implemented efficiently owing to its use of the wavelet packet transform. The proposed method uses a modified spectral subtraction in which noise is estimated by a least-squares line method and an overweighting gain with a nonlinear structure is applied per subband; the overweighting gain suppresses residual musical noise, while the subband structure allows the weights to follow changes in the signal. Speech enhanced by our method has the following properties: 1) speech intelligibility is reliably preserved; 2) musical noise is efficiently reduced. Various assessments confirmed that the performance of the proposed method was better than that of the compared methods under various noise-level conditions. In particular, the proposed method showed good results even at low SNR.

  • Single Channel Speech Enhancement Based on Perceptual Frequency-Weighting

    Seiji HAYASHI  Masahiro SUGUIMOTO  

     
    LETTER-Speech and Hearing

      Vol:
    E90-D No:6
      Page(s):
    998-1001

    The present paper describes quality enhancement of speech corrupted by additive background noise in a single-channel system. The proposed approach introduces perceptual criteria, via a frequency-weighting filter, into a subtractive-type enhancement process. The newly developed algorithm allows automatic adaptation of the enhancement system in time and frequency and finds a suitable noise estimate according to the frequency content of the corrupted speech. Experimental results show that the proposed approach can efficiently remove additive noise for various types of noise corruption.

  • Statistical Model-Based VAD Algorithm with Wavelet Transform

    Yoon-Chang LEE  Sang-Sik AHN  

     
    PAPER

      Vol:
    E89-A No:6
      Page(s):
    1594-1600

    This paper presents a new statistical model-based voice activity detection (VAD) algorithm in the wavelet domain to improve performance in non-stationary environments. Owing to their efficient time-frequency localization and multi-resolution characteristics, wavelet representations are well suited to processing non-stationary signals such as speech. Exploiting the fact that the wavelet packet is a very efficient approximation of the discrete Fourier transform and has built-in de-noising capability, we first apply wavelet packet decomposition to localize the energy in frequency space, then use spectral subtraction and matched filtering to enhance the SNR. Since conventional wavelet-based spectral subtraction eliminates low-power speech in onset and offset regions and generates musical noise, we derive an improved multi-band spectral subtraction. Furthermore, noticing that a fixed threshold cannot follow fluctuations of time-varying noise power, and that this inability to adapt severely limits VAD performance, we propose a statistical model-based VAD algorithm in the wavelet domain with an adaptive threshold. Extensive computer simulations and comparisons with conventional algorithms demonstrate the performance improvement of the proposed algorithm under various noise environments.

  • Dithered Subband Coding with Spectral Subtraction

    Chatree BUDSABATHON  Akinori NISHIHARA  

     
    PAPER-Digital Signal Processing

      Vol:
    E89-A No:6
      Page(s):
    1788-1793

    In this paper, we propose a novel technique that combines dithered subband coding with spectral subtraction to improve the perceptual quality of coded audio at low bit rates. It is well known that signal-correlated distortion is audible when an audio signal is quantized at bit rates below the lower bound of perceptual coding. We show that this problem can be overcome by applying dithered quantization in each subband. The quantization noise is thereby rendered into signal-independent white noise, which is then estimated and removed by spectral subtraction at the decoder. Experimental results show an effective improvement of the proposed method over the conventional one in terms of SNR and human listening tests. The proposed method can be combined with other existing or future coding methods, such as perceptual coding, to improve their performance at low bit rates.

  • Simultaneous Adaptation of Echo Cancellation and Spectral Subtraction for In-Car Speech Recognition

    Osamu ICHIKAWA  Masafumi NISHIMURA  

     
    PAPER-Speech Enhancement

      Vol:
    E88-A No:7
      Page(s):
    1732-1738

    Recently, automatic speech recognition in cars has found practical use in applications such as car navigation and hands-free telephone dialing. For noise robustness, current successes rest on the assumption that the only noise present is stationary cruising noise; the recognition rate is therefore greatly reduced when music or news is playing from a radio or a CD player in the car. Since reference signals are available from such in-vehicle units, echo cancellers hold great promise for eliminating the echo component from the observed noisy signals. However, previous research reported that the performance of an echo canceller degrades in very noisy conditions, which suggests that echo cancellation and noise reduction should be combined. In this paper, we propose a system that performs echo cancellation and spectral subtraction simultaneously: a stationary noise component for spectral subtraction is estimated through the adaptation of the echo canceller. In our experiments, this system significantly reduced automatic speech recognition errors compared with the conventional combination of echo cancellation and spectral subtraction.

  • Speech Enhancement by Spectral Subtraction Based on Subspace Decomposition

    Takahiro MURAKAMI  Tetsuya HOYA  Yoshihisa ISHIDA  

     
    PAPER-Speech and Hearing

      Vol:
    E88-A No:3
      Page(s):
    690-701

    This paper presents a novel algorithm for spectral subtraction (SS). The method is derived from a relation between the spectrum obtained by the discrete Fourier transform (DFT) and that obtained by a subspace decomposition method. Using this relation, it is shown that a noise reduction algorithm based on subspace decomposition reduces to an SS method in which the noise components of an observed signal are eliminated by subtracting the variance of the noise process in the frequency domain. Moreover, the method significantly reduces computational complexity compared with standard subspace decomposition. As in conventional SS methods, our method exploits the variance of the noise process estimated from a preceding segment in which speech is absent but noise is present. To detect such non-speech segments more reliably, a novel robust voice activity detector (VAD) is also proposed; it utilizes the spread of the eigenvalues of an autocorrelation matrix of the observed signal. Simulation results show that the proposed method yields improved enhancement quality compared with conventional SS-based schemes.
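    The eigenvalue-spread statistic behind the VAD above can be sketched as follows (an illustrative reading of the abstract, not the authors' exact detector; the matrix order and any decision threshold are assumptions):

    ```python
    import numpy as np

    def eigenvalue_spread(frame, order=10):
        """Spread (max/min ratio) of the eigenvalues of a frame's
        autocorrelation matrix. Structured signals such as voiced speech
        concentrate energy in a few eigenvalues (large spread), while
        white-noise-like frames spread it evenly (small spread). A VAD can
        threshold this value; the threshold choice is not specified here.
        """
        n = len(frame)
        # Autocorrelation at lags 0 .. order-1.
        r = np.correlate(frame, frame, mode="full")[n - 1:n - 1 + order]
        # Symmetric Toeplitz autocorrelation matrix built from the lags.
        R = np.array([[r[abs(i - j)] for j in range(order)]
                      for i in range(order)])
        eig = np.maximum(np.linalg.eigvalsh(R), 1e-12)
        return eig.max() / eig.min()
    ```

    Comparing the spread for a sinusoidal (speech-like) frame against a white-noise frame shows the separation the detector relies on.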

  • Automatic Adjustment of Subband Likelihood Recombination Weights for Improving Noise-Robustness of a Multi-SNR Multi-Band Speaker Identification System

    Kenichi YOSHIDA  Kazuyuki TAKAGI  Kazuhiko OZEKI  

     
    PAPER-Speech and Hearing

      Vol:
    E87-D No:11
      Page(s):
    2453-2459

    This paper is concerned with improving noise-robustness of a multi-SNR multi-band speaker identification system by introducing automatic adjustment of subband likelihood recombination weights. The adjustment is performed on the basis of subband power calculated from the noise observed just before the speech starts in the input signal. To evaluate the noise-robustness of this system, text-independent speaker identification experiments were conducted on speech data corrupted with noises recorded in five environments: "bus," "car," "office," "lobby," and "restaurant". It was found that the present method reduces the identification error by 15.9% compared with the multi-SNR multi-band method with equal recombination weights at 0 dB SNR. The performance of the present method was compared with a clean fullband method in which a speaker model training is performed on clean speech data, and spectral subtraction is applied to the input signal in the speaker identification stage. When the clean fullband method without spectral subtraction is taken as a baseline, the multi-SNR multi-band method with automatic adjustment of recombination weights attained 56.8% error reduction on average, while the average error reduction rate of the clean fullband method with spectral subtraction was 11.4% at 0 dB SNR.

  • Blind Source Separation for Moving Speech Signals Using Blockwise ICA and Residual Crosstalk Subtraction

    Ryo MUKAI  Hiroshi SAWADA  Shoko ARAKI  Shoji MAKINO  

     
    PAPER-Speech/Acoustic Signal Processing

      Vol:
    E87-A No:8
      Page(s):
    1941-1948

    This paper describes a real-time blind source separation (BSS) method for moving speech signals in a room. Our method employs frequency-domain independent component analysis (ICA) using a blockwise batch algorithm in the first stage; the separated signals are then refined by postprocessing consisting of crosstalk component estimation and non-stationary spectral subtraction in the second stage. The blockwise batch algorithm achieves better performance than an online algorithm when sources are fixed, and the postprocessing compensates for the performance degradation caused by source movement. Experimental results using speech signals recorded in a real room show that the proposed method realizes robust real-time separation for moving sources. Our method is implemented on a standard PC and works in real time.

  • Filter Bank Subtraction for Robust Speech Recognition

    Kazuo ONOE  Hiroyuki SEGI  Takeshi KOBAYAKAWA  Shoei SATO  Shinichi HOMMA  Toru IMAI  Akio ANDO  

     
    PAPER-Robust Speech Recognition and Enhancement

      Vol:
    E86-D No:3
      Page(s):
    483-488

    In this paper, we propose a new technique of filter bank subtraction for robust speech recognition under various acoustic conditions. Spectral subtraction is a simple and useful technique for reducing the influence of additive noise. Conventional spectral subtraction assumes accurate estimation of the noise spectrum and no correlation between speech and noise. These assumptions, however, are rarely satisfied in reality, leading to degraded speech recognition accuracy; moreover, the improvement attained by conventional methods is slight when the input SNR changes sharply. We propose a new method in which the output values of filter banks are used for noise estimation and subtraction. By estimating noise at each filter bank instead of at each frequency point, the method alleviates the need for precise noise estimation. We also take into account the expected phase differences between the spectra of speech and noise during subtraction, and control the subtraction coefficient on a theoretical basis. Recognition experiments on test sets at several SNRs showed that the filter bank subtraction technique improved word accuracy significantly and outperformed conventional spectral subtraction on all the test sets. In further experiments on recognizing speech from TV news field reports with environmental noise, the proposed method again yielded better results than the conventional method.
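    The core idea of subtracting on filter-bank outputs rather than individual FFT bins can be sketched briefly (a minimal sketch under stated assumptions: `coeff` stands in for the theoretically controlled subtraction coefficient mentioned in the abstract, and the floor value is illustrative):

    ```python
    import numpy as np

    def filter_bank_subtraction(speech_fb, noise_fb, coeff=1.0, floor=0.1):
        """Subtract an estimated noise level from each filter-bank output.
        Aggregating bins into banks before subtracting tolerates a coarser
        noise estimate than per-bin spectral subtraction.

        speech_fb, noise_fb: filter-bank power outputs, shape (n_frames, n_banks).
        """
        out = speech_fb - coeff * noise_fb
        # Floor to a fraction of the noisy output to avoid negative energies.
        return np.maximum(out, floor * speech_fb)
    ```

    Because each bank averages many frequency points, errors in the per-point noise estimate partially cancel, which is the robustness argument made in the abstract.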

  • Speech Enhancement by Profile Fitting Method

    Osamu ICHIKAWA  Tetsuya TAKIGUCHI  Masafumi NISHIMURA  

     
    PAPER-Robust Speech Recognition and Enhancement

      Vol:
    E86-D No:3
      Page(s):
    514-521

    It is believed that distant-talking speech recognition in a noisy environment requires a large-scale microphone array, which cannot fit into small consumer devices. Our objective is to improve performance with a limited number of microphones (preferably only left and right). In this paper, we focus on the "profile", the shape of the power distribution as a function of beamforming direction. An observed profile can be decomposed into known profiles for directional sound sources and a non-directional background source. Evaluations confirmed that this method reduced the CER (Character Error Ratio) on a dictation task by more than 20% compared to a conventional 2-channel adaptive spectral subtraction beamformer in a non-reverberant environment.

  • Grey Filtering and Its Application to Speech Enhancement

    Cheng-Hsiung HSIEH  

     
    PAPER-Robust Speech Recognition and Enhancement

      Vol:
    E86-D No:3
      Page(s):
    522-533

    In this paper, a grey filtering approach based on the GM(1,1) model is proposed and applied to speech enhancement. The fundamental idea of the proposed grey filtering is to relate the estimation error of the GM(1,1) model to the additive noise. Simulation results indicate that the additive noise can be estimated accurately by the proposed grey filtering approach with an appropriate scaling factor. Whereas the spectral subtraction approach to speech enhancement depends heavily on accurate statistics of the additive noise, grey filtering is able to estimate the additive noise appropriately. A magnitude spectral subtraction (MSS) approach for speech enhancement is therefore proposed that requires no mechanism for distinguishing speech from non-speech portions. Two examples are provided to justify the proposed MSS approach based on grey filtering, and the simulation results show that the objective of speech enhancement is achieved. The proposed MSS approach is also compared with the HFR-based approach in [4] and the ZP approach in [5]. Simulation results indicate that in most cases the HFR-based and ZP approaches outperform the proposed MSS approach in SNRimp; however, the proposed MSS approach has better subjective listening quality.

  • Spectral Subtraction Based on Statistical Criteria of the Spectral Distribution

    Hidetoshi NAKASHIMA  Yoshifumi CHISAKI  Tsuyoshi USAGAWA  Masanao EBATA  

     
    PAPER-Digital Signal Processing

      Vol:
    E85-A No:10
      Page(s):
    2283-2292

    This paper addresses a single-channel speech enhancement method that utilizes the mean value and variance of the logarithmic noise power spectra. An important issue for single-channel speech enhancement algorithms is determining the trade-off point between spectral distortion and residual noise, which requires accurate discrimination between speech and noise spectral components. Conventional methods determine the trade-off point using experimentally obtained parameters; as a result, spectral discrimination is inadequate, and the enhanced speech suffers from spectral distortion or residual noise. A criterion for determining the trade-off point is therefore necessary. The proposed method determines the trade-off point between spectral distortion and residual noise level by discriminating between speech and noise spectral components based on statistical criteria. The discrimination is performed by hypothesis testing that utilizes the means and variances of the logarithmic power spectra. The discriminated spectral components are divided into speech-dominant and noise-dominant ones: for speech-dominant components, spectral subtraction is performed to minimize the spectral distortion; for noise-dominant components, attenuation is performed to reduce the noise level. The performance of the method is confirmed in terms of waveform, spectrogram, noise reduction level, and a speech recognition task. The noise reduction level and speech recognition rate are improved, showing that the method effectively reduces musical noise and improves enhanced speech quality.
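    The discriminate-then-treat structure described in this abstract can be sketched with a simple z-style threshold standing in for the paper's hypothesis test (the `z` threshold, attenuation factor, and helper names are illustrative assumptions, not the authors' statistics):

    ```python
    import numpy as np

    def discriminate_bins(log_power, noise_mean, noise_std, z=2.0):
        """Label a bin speech-dominant when its log power exceeds the noise
        mean by more than z noise standard deviations (illustrative stand-in
        for the paper's hypothesis test). Returns a boolean mask."""
        return log_power > noise_mean + z * noise_std

    def enhance(power, noise_power, noise_mean, noise_std, atten=0.1):
        """Spectral subtraction for speech-dominant bins, plain attenuation
        for noise-dominant bins, per the two-way treatment in the abstract."""
        speech_mask = discriminate_bins(np.log(power + 1e-12),
                                        noise_mean, noise_std)
        subtracted = np.maximum(power - noise_power, 0.0)
        return np.where(speech_mask, subtracted, atten * power)
    ```

    Splitting the treatment this way is what lets the method suppress noise-dominant bins aggressively without distorting bins that carry speech energy.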
