
Author Search Result

[Author] Meng SUN (17 hits)

1-17 of 17 hits
  • Off-Grid Frequency Estimation with Random Measurements

    Xushan CHEN  Jibin YANG  Meng SUN  Jianfeng LI  

     
    LETTER-Digital Signal Processing

      Vol:
    E100-A No:11
      Page(s):
    2493-2497

    In order to significantly reduce the time and space needed, compressive sensing builds upon the fundamental assumption of sparsity under a suitable discrete dictionary. However, in many signal processing applications there exists a mismatch between the assumed and the true sparsity bases, so that the actual representative coefficients do not lie on the finite grid discretized by the assumed dictionary. Unlike previous work, this paper introduces a unified compressive measurement operator into atomic norm denoising and investigates the problem of recovering the frequency support of a combination of multiple sinusoids from sub-Nyquist samples. We provide some useful properties to ensure the optimality of the unified framework via semidefinite programming (SDP). We also provide a sufficient condition that guarantees the uniqueness of the optimizer with high probability. Theoretical results demonstrate that the proposed method can locate the nonzero coefficients on an infinitely dense grid over a wide range of SNR cases.
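
    The grid-mismatch issue this abstract describes can be seen in a few lines of NumPy: a sinusoid whose frequency falls between two DFT bins is no longer sparse in the DFT dictionary. This is only an illustrative sketch of the mismatch problem, not the paper's atomic-norm method; the function name and parameters are invented for the example.

```python
import numpy as np

def dft_peak_fraction(freq, n=64):
    """Fraction of a sinusoid's spectral energy captured by its largest DFT bin.

    An on-grid frequency is 1-sparse in the DFT basis; an off-grid frequency
    leaks energy across many bins (the basis-mismatch problem)."""
    t = np.arange(n)
    x = np.exp(2j * np.pi * freq * t)
    spec = np.abs(np.fft.fft(x)) ** 2
    return spec.max() / spec.sum()

on_grid = dft_peak_fraction(4 / 64)     # frequency exactly on the DFT grid
off_grid = dft_peak_fraction(4.5 / 64)  # frequency halfway between two bins
```

    For the half-bin offset, the largest bin captures well under half of the energy, which is why atomic-norm methods work over a continuous frequency domain instead of a fixed grid.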

  • On the Complementary Role of DNN Multi-Level Enhancement for Noisy Robust Speaker Recognition in an I-Vector Framework

    Xingyu ZHANG  Xia ZOU  Meng SUN  Penglong WU  Yimin WANG  Jun HE  

     
    LETTER-Speech and Hearing

      Vol:
    E103-A No:1
      Page(s):
    356-360

    In order to improve the noise robustness of automatic speaker recognition, many techniques for speech/feature enhancement have been explored using deep neural networks (DNNs). In this work, a DNN multi-level enhancement (DNN-ME), which consists of the stages of signal enhancement, cepstrum enhancement and i-vector enhancement, is proposed for text-independent speaker recognition. Given that these enhancement methods are applied at different stages of the speaker recognition pipeline, it is worth exploring their complementary roles, which benefits the understanding of the pros and cons of the enhancements at different stages. To exploit the capabilities of DNN-ME as much as possible, two methods, called cascaded DNN-ME and joint input of DNNs, are studied. Weighted Gaussian mixture models (WGMMs), proposed in our previous work, are also applied to further improve the model's performance. Experiments conducted on the Speakers in the Wild (SITW) database show that DNN-ME has significant superiority over systems with only a single enhancement stage for noise-robust speaker recognition. Compared with the i-vector baseline, the equal error rate (EER) was reduced from 5.75 to 4.01.

  • A Video Salient Region Detection Framework Using Spatiotemporal Consistency Optimization

    Yunfei ZHENG  Xiongwei ZHANG  Lei BAO  Tieyong CAO  Yonggang HU  Meng SUN  

     
    PAPER-Image

      Vol:
    E100-A No:2
      Page(s):
    688-701

    Accurately labeling a salient region in video with a cluttered background and complex motion conditions is still a challenging task. Most existing video salient region detection models mainly extract stimulus-driven saliency features to detect the salient region, and are easily influenced by cluttered backgrounds and complex motion, which may lead to incomplete or wrong detection results. In this paper, we propose a video salient region detection framework that fuses stimulus-driven saliency features with a spatiotemporal consistency cue to improve detection performance under these complex conditions. On one hand, stimulus-driven spatial and temporal saliency features are extracted to derive the initial spatial and temporal salient region maps. On the other hand, to exploit the spatiotemporal consistency cue, an effective spatiotemporal consistency optimization model is presented. We use this model to optimize the initial spatial and temporal salient region maps; the superpixel-level spatiotemporal salient region map is then derived by optimizing the initial spatiotemporal salient region map. Finally, the pixel-level spatiotemporal salient region map is derived by solving a self-defined energy model. Experimental results on challenging video datasets demonstrate that the proposed framework outperforms state-of-the-art methods.

  • Speech Enhancement Combining NMF Weighted by Speech Presence Probability and Statistical Model

    Yonggang HU  Xiongwei ZHANG  Xia ZOU  Gang MIN  Meng SUN  Yunfei ZHENG  

     
    LETTER-Speech and Hearing

      Vol:
    E98-A No:12
      Page(s):
    2701-2704

    Conventional non-negative matrix factorization (NMF)-based speech enhancement is accomplished by iterative updates with prior knowledge of the clean speech and noise spectral bases. With a probabilistic estimate of whether speech is present in a given frame, this letter proposes a speech enhancement algorithm that incorporates the speech presence probability (SPP), obtained via noise estimation, into the NMF process. To take advantage of both the NMF-based and statistical model-based approaches, the final enhanced speech is obtained by applying a statistical model-based filter to the output of the SPP-weighted NMF. Objective evaluations using the perceptual evaluation of speech quality (PESQ) on TIMIT with 20 noise types at various signal-to-noise ratio (SNR) levels demonstrate the superiority of the proposed algorithm over the conventional NMF and statistical model-based baselines.
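
    The NMF machinery underlying this and several of the following letters can be sketched with the standard multiplicative updates for the KL divergence. The SPP weighting and the statistical-model filter are omitted, so this is only a generic baseline sketch with names chosen for the example; a per-frame SPP weight would scale each column's contribution to the updates.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_nmf(V, rank, n_iter=200, eps=1e-9):
    """Factor a nonnegative spectrogram V ~ W @ H with the classical
    multiplicative updates for the KL divergence."""
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

V = rng.random((20, 3)) @ rng.random((3, 40))  # toy rank-3 "magnitude spectrogram"
W, H = kl_nmf(V, rank=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    The factors stay nonnegative by construction because each update multiplies by a nonnegative ratio, which is what makes the multiplicative form attractive for spectrogram models.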

  • Semi-Supervised Speech Enhancement Combining Nonnegative Matrix Factorization and Robust Principal Component Analysis

    Yonggang HU  Xiongwei ZHANG  Xia ZOU  Meng SUN  Yunfei ZHENG  Gang MIN  

     
    LETTER-Speech and Hearing

      Vol:
    E100-A No:8
      Page(s):
    1714-1719

    Nonnegative matrix factorization (NMF) is one of the most popular machine learning tools for speech enhancement. Supervised NMF-based speech enhancement is accomplished by iterative updates with prior knowledge of the clean speech and noise spectral bases. However, in many real-world scenarios it is not always possible to conduct any prior training. The traditional semi-supervised NMF (SNMF) overcomes this shortcoming, but its performance degrades. In this letter, without any prior knowledge of the speech or noise, we present an improved semi-supervised NMF-based speech enhancement algorithm combining NMF and robust principal component analysis (RPCA). In this approach, fixed speech bases are obtained offline from training samples chosen from a public dataset. The noise samples used for noise basis training, instead of being characterized a priori as usual, are obtained via the RPCA algorithm on the fly. This letter also studies whether the time length of the estimated noise samples affects the performance of the algorithm. Three metrics, PESQ, SDR and SNR, are applied to evaluate performance in experiments on TIMIT with 20 noise types at various signal-to-noise ratio levels. Extensive experimental results demonstrate the superiority of the proposed algorithm over the competing speech enhancement algorithms.
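
    The RPCA step that extracts noise samples on the fly can be sketched with a bare-bones inexact-ALM principal component pursuit, splitting a spectrogram M into a low-rank part L (in speech enhancement, typically the repetitive noise) and a sparse part S (typically the speech). This is a generic textbook sketch with invented parameter choices, not the letter's exact configuration.

```python
import numpy as np

def soft(x, tau):
    """Elementwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca(M, n_iter=50):
    """Minimal inexact-ALM principal component pursuit: M ~ L + S with
    L low rank (singular-value thresholding) and S sparse (soft thresholding)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(M, 2)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        S = soft(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        mu *= 1.5  # the usual increasing-mu schedule
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 part
S0 = np.where(rng.random((30, 30)) < 0.05, 10.0, 0.0)             # sparse spikes
L, S = rpca(L0 + S0)
low_rank_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
```

    On this synthetic low-rank-plus-sparse mixture the iteration separates the two parts almost exactly, which is the property the letter exploits to harvest noise bases without prior noise training.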

  • Automatic Model Order Selection for Convolutive Non-Negative Matrix Factorization

    Yinan LI  Xiongwei ZHANG  Meng SUN  Chong JIA  Xia ZOU  

     
    LETTER-Speech and Hearing

      Vol:
    E99-A No:10
      Page(s):
    1867-1870

    Exploring a parsimonious model that is just large enough to represent the temporal dependency of time-serial signals such as audio or speech is a practical requirement for many signal processing applications. A well-suited method for intuitively and efficiently representing magnitude spectra is to use convolutive non-negative matrix factorization (CNMF) to discover the temporal relationships among nearby frames. However, the model order selection problem in CNMF, i.e., the choice of the number of convolutive bases, has seldom been investigated. In this paper, we propose a novel Bayesian framework that automatically learns the optimal model order through maximum a posteriori (MAP) estimation. The proposed method yields a parsimonious, low-rank approximation by removing redundant bases iteratively. Experiments show that the proposed algorithm is very effective in automatically determining the correct model order.
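
    A crude stand-in for automatic order selection is to start from a deliberately over-sized basis set and periodically prune bases whose activations carry negligible energy; the paper's MAP criterion is more principled than this. Everything below (function name, threshold, pruning schedule, and the use of plain rather than convolutive NMF) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf_with_pruning(V, max_rank=10, n_iter=400, prune_tol=0.01, eps=1e-9):
    """KL-divergence NMF that begins over-parameterized and periodically
    removes bases whose activations carry negligible energy."""
    m, n = V.shape
    W = rng.random((m, max_rank)) + eps
    H = rng.random((max_rank, n)) + eps
    for it in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
        if it % 100 == 99:  # periodically drop near-silent bases
            energy = W.sum(axis=0) * H.sum(axis=1)
            keep = energy > prune_tol * energy.max()
            W, H = W[:, keep], H[keep, :]
    return W, H

V = rng.random((20, 3)) @ rng.random((3, 40))  # exactly rank-3 toy data
W, H = nmf_with_pruning(V)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    The retained model order is whatever survives pruning; a MAP prior, as in the paper, replaces the ad-hoc energy threshold with a probabilistic relevance criterion.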

  • Unsupervised Learning of Continuous Density HMM for Variable-Length Spoken Unit Discovery

    Meng SUN  Hugo VAN HAMME  Yimin WANG  Xiongwei ZHANG  

     
    LETTER-Speech and Hearing

      Publicized:
    2015/10/21
      Vol:
    E99-D No:1
      Page(s):
    296-299

    Unsupervised spoken unit discovery, or zero-resource speech recognition, is an emerging research topic which is important for spoken document analysis of languages or dialects with little human annotation. In this paper, we extend our earlier joint training framework for unsupervised learning of discrete density HMMs to continuous density HMMs (CDHMMs) and apply it to spoken unit discovery. In the proposed recipe, we first cluster a group of Gaussians which then act as initializations for the joint training framework of nonnegative matrix factorization and semi-continuous density HMMs (SCDHMMs). In an SCDHMM, all hidden states share the same group of Gaussians but with different mixture weights. A CDHMM is subsequently constructed by tying the top-N activated Gaussians to each hidden state. Baum-Welch training is finally conducted to update the parameters of the Gaussians, the mixture weights and the HMM transition probabilities. Experiments were conducted on word discovery from TIDIGITS and phone discovery from TIMIT. For TIDIGITS, units were modeled by 10 states, which turn out to be strongly related to words; for TIMIT, units were modeled by 3 states, which are likely to be phonemes.

  • Parallel Feature Network For Saliency Detection

    Zheng FANG  Tieyong CAO  Jibin YANG  Meng SUN  

     
    LETTER-Image

      Vol:
    E102-A No:2
      Page(s):
    480-485

    Saliency detection is widely used in many vision tasks like image retrieval, compression and person re-identification. Deep-learning methods have achieved great results, but most of them focus on performance while ignoring model efficiency, which makes them hard to transplant into other applications. How to design an efficient model has therefore become the main problem. In this letter, we propose the parallel feature network, a saliency model built on a convolutional neural network (CNN) in a parallel manner. Parallel dilation blocks are first used to extract features from different layers of the CNN, then a parallel upsampling structure is adopted to upsample the feature maps. Finally, saliency maps are obtained by fusing summations and concatenations of the feature maps. Our final model, built on VGG-16, is much smaller and faster than existing saliency models and also achieves state-of-the-art performance.

  • Spectra Restoration of Bone-Conducted Speech via Attention-Based Contextual Information and Spectro-Temporal Structure Constraint Open Access

    Changyan ZHENG  Tieyong CAO  Jibin YANG  Xiongwei ZHANG  Meng SUN  

     
    LETTER-Digital Signal Processing

      Vol:
    E102-A No:12
      Page(s):
    2001-2007

    Compared with acoustic microphone (AM) speech, bone-conducted microphone (BCM) speech is largely immune to background noise, but suffers from severe loss of information due to the characteristics of the human-body transmission channel. In this letter, a new method for speaker-dependent BCM speech enhancement is proposed, in which we focus on the restoration of the distorted spectra. In order to better infer the missing components, an attention-based bidirectional Long Short-Term Memory (AB-BLSTM) network is designed to optimize the use of contextual information in modeling the relationship between the spectra of BCM speech and the corresponding clean AM speech. Meanwhile, a structural error metric, the Structural SIMilarity (SSIM) metric originating from image processing, is adopted as the loss function, which constrains the spectro-temporal structures in the recovered spectra. Experiments demonstrate that, compared with approaches based on a conventional DNN and the mean square error (MSE), the proposed method can better recover the missing phonemes and obtain spectra whose spectro-temporal structure is more similar to the target, which leads to great improvements on objective metrics.
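
    The SSIM criterion has a simple closed form; a single-window (global) version over a whole spectrogram patch looks like the sketch below. The stabilizing constants and the 1 − SSIM loss convention follow common practice, not necessarily the letter's exact settings, and practical SSIM averages the statistic over local sliding windows.

```python
import numpy as np

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global Structural SIMilarity of two equally sized arrays, built from
    their means, variances and covariance."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_loss(pred, target):
    """SSIM-based training loss: perfect structural match gives zero loss."""
    return 1.0 - ssim(pred, target)

rng = np.random.default_rng(0)
a = rng.random((16, 16))
b = rng.random((16, 16))
```

    Unlike MSE, this criterion compares luminance, contrast and structure jointly, which is why it rewards spectra with the right spectro-temporal shape rather than merely small pointwise error.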

  • Deep Neural Network Based Monaural Speech Enhancement with Low-Rank Analysis and Speech Present Probability

    Wenhua SHI  Xiongwei ZHANG  Xia ZOU  Meng SUN  Wei HAN  Li LI  Gang MIN  

     
    LETTER-Noise and Vibration

      Vol:
    E101-A No:3
      Page(s):
    585-589

    A monaural speech enhancement method combining a deep neural network (DNN) with low-rank analysis and speech present probability is proposed in this letter. Low-rank and sparse analysis is first applied to the noisy speech spectrogram to obtain an approximate low-rank representation of the noise. Then a joint feature training strategy for DNN-based speech enhancement is presented, which helps the DNN better predict the target speech. To reduce the residual noise in highly overlapping regions and the high-frequency domain, speech present probability (SPP) weighted post-processing is employed to further improve the quality of the speech enhanced by the trained DNN model. Compared with supervised non-negative matrix factorization (NMF) and the conventional DNN method, the proposed method obtains improved speech enhancement performance under stationary and non-stationary conditions.

  • Improved Semi-Supervised NMF Based Real-Time Capable Speech Enhancement

    Yonggang HU  Xiongwei ZHANG  Xia ZOU  Meng SUN  Gang MIN  Yinan LI  

     
    LETTER-Speech and Hearing

      Vol:
    E99-A No:1
      Page(s):
    402-406

    Nonnegative matrix factorization (NMF) is one of the most popular tools for speech enhancement. In this letter, we present an improved semi-supervised NMF (ISNMF)-based speech enhancement algorithm combining noise estimation and incremental NMF (INMF). In this approach, fixed speech bases are obtained offline from training samples in advance, while noise bases are trained on the fly whenever a new noisy frame arrives. The INMF algorithm is adopted for noise basis learning because it overcomes the difficulties that conventional NMF confronts in online processing. The proposed algorithm is real-time capable in the sense that it processes the time frames of the noisy speech one by one with feasible computational complexity. Four different objective evaluation measures at various signal-to-noise ratio (SNR) levels demonstrate the superiority of the proposed method over traditional semi-supervised NMF (SNMF) and the well-known robust principal component analysis (RPCA) algorithm.
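
    The real-time claim rests on processing one frame at a time with the speech bases held fixed; the per-frame step is just a small multiplicative-update loop over a single spectrogram column. A minimal sketch of that inference step (generic KL-NMF inference with invented names; the INMF noise-basis update itself is omitted):

```python
import numpy as np

def frame_activations(v, W, n_iter=50, eps=1e-9):
    """Infer nonnegative activations h for one frame v with the basis matrix W
    held fixed, using multiplicative KL-divergence updates."""
    h = np.full(W.shape[1], 1.0 / W.shape[1])
    for _ in range(n_iter):
        Wh = W @ h + eps
        h *= (W.T @ (v / Wh)) / (W.sum(axis=0) + eps)
    return h

rng = np.random.default_rng(0)
W = rng.random((16, 4)) + 0.1  # stand-in for pre-trained speech bases
v = W @ rng.random(4)          # a frame that truly lies in the basis cone
h = frame_activations(v, W)
rel_err = np.linalg.norm(W @ h - v) / np.linalg.norm(v)
```

    In the letter's scheme, each incoming frame would additionally update the noise bases incrementally while these speech bases stay fixed.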

  • A Perceptually Motivated Approach for Speech Enhancement Based on Deep Neural Network

    Wei HAN  Xiongwei ZHANG  Gang MIN  Meng SUN  

     
    LETTER-Speech and Hearing

      Vol:
    E99-A No:4
      Page(s):
    835-838

    In this letter, a novel perceptually motivated single-channel speech enhancement approach based on a Deep Neural Network (DNN) is presented. Taking into account the good masking properties of the human auditory system, a new DNN architecture is proposed to reduce the perceptual effect of the residual noise. This architecture is directly trained to learn a gain function which estimates the power spectrum of clean speech and shapes the spectrum of the residual noise at the same time. Experimental results demonstrate that the proposed perceptually motivated speech enhancement approach achieves better objective speech quality when tested with TIMIT sentences corrupted by various types of noise, regardless of whether the noise conditions are included in the training set.

  • An Improved Supervised Speech Separation Method Based on Perceptual Weighted Deep Recurrent Neural Networks

    Wei HAN  Xiongwei ZHANG  Meng SUN  Li LI  Wenhua SHI  

     
    LETTER-Speech and Hearing

      Vol:
    E100-A No:2
      Page(s):
    718-721

    In this letter, we propose a novel speech separation method based on a perceptually weighted deep recurrent neural network (DRNN) which incorporates the masking properties of the human auditory system. In the supervised training stage, we first utilize the clean label speech of two different speakers to calculate two perceptual weighting matrices. Then, the obtained perceptual weighting matrices are used to adjust the mean squared error between the network outputs and the reference features of the two clean speech signals, so that the two speakers' signals can mask each other. Experimental results on the TSP speech corpus demonstrate that the proposed speech separation approach achieves significant improvements over state-of-the-art methods when tested with different mixing cases.
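
    The core trick, masking-aware weighting of the training error, amounts to a per-bin weighted MSE with one weight matrix per speaker. A hedged sketch with random stand-in weights (the letter computes the weights from the clean references' masking properties):

```python
import numpy as np

def weighted_mse(pred, target, weight):
    """Mean squared error with a per-time-frequency-bin weight matrix."""
    return float(np.mean(weight * (pred - target) ** 2))

rng = np.random.default_rng(0)
out1, ref1 = rng.random((8, 10)), rng.random((8, 10))  # speaker-1 output / reference
out2, ref2 = rng.random((8, 10)), rng.random((8, 10))  # speaker-2 output / reference
P1, P2 = rng.random((8, 10)), rng.random((8, 10))      # perceptual weight matrices
loss = weighted_mse(out1, ref1, P1) + weighted_mse(out2, ref2, P2)
```

    With all weights equal to one this reduces to the plain MSE; down-weighting bins where one speaker perceptually masks the other is what encodes the auditory model into training.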

  • Cramer-Rao Bounds for Compressive Frequency Estimation

    Xushan CHEN  Xiongwei ZHANG  Jibin YANG  Meng SUN  Weiwei YANG  

     
    LETTER-Digital Signal Processing

      Vol:
    E98-A No:3
      Page(s):
    874-877

    Compressive sensing (CS) exploits the sparsity or compressibility of signals to recover them from a small set of nonadaptive, linear measurements. The number of measurements is much smaller than the Nyquist rate, so signal recovery is achieved at relatively low expense. Consequently, many signal processing problems which do not require exact signal recovery have attracted considerable attention recently. In this paper, we establish a framework for parameter estimation of a signal corrupted by additive colored Gaussian noise (ACGN) based on compressive measurements. We derive the Cramer-Rao lower bound (CRB) for frequency estimation problems in the compressive domain and prove some useful properties of the CRB under different compressive measurements. Finally, we show that the theoretical conclusions agree with experimental results.
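
    For a single complex sinusoid observed through a compressive operator Phi, the Fisher information for the frequency reduces to a norm of Phi applied to the signal's derivative, so the CRB can be evaluated numerically. A sketch under simplifying assumptions (unit amplitude, known phase, white complex Gaussian noise; the paper treats the harder colored-noise case):

```python
import numpy as np

def crb_freq(Phi, f, sigma2, n=64):
    """CRB for the frequency of a unit-amplitude complex sinusoid
    x_t = exp(2*pi*1j*f*t), observed as y = Phi @ x + white complex
    Gaussian noise of variance sigma2 (amplitude and phase known)."""
    t = np.arange(n)
    dx_df = 2j * np.pi * t * np.exp(2j * np.pi * f * t)  # d x / d f
    fisher = (2.0 / sigma2) * np.linalg.norm(Phi @ dx_df) ** 2
    return 1.0 / fisher

n = 64
full = crb_freq(np.eye(n), f=0.2, sigma2=0.1, n=n)
half = crb_freq(np.eye(n)[::2], f=0.2, sigma2=0.1, n=n)  # keep every other sample
```

    Discarding measurements can only shrink the Fisher information, so the compressive bound is never tighter than the full-sample one; the bound also scales linearly with the noise power.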

  • Online Convolutive Non-Negative Bases Learning for Speech Enhancement

    Yinan LI  Xiongwei ZHANG  Meng SUN  Yonggang HU  Li LI  

     
    LETTER-Speech and Hearing

      Vol:
    E99-A No:8
      Page(s):
    1609-1613

    An online version of convolutive non-negative sparse coding (CNSC) with the generalized Kullback-Leibler (K-L) divergence is proposed to adaptively learn spectral-temporal bases from speech streams. The proposed scheme processes training data piece by piece and incrementally updates the learned bases with accumulated statistics, overcoming the inefficiency of its offline counterpart in processing large-scale or streaming data. Compared to conventional non-negative sparse coding, we utilize a convolutive model for the bases, so that each basis is capable of describing a relatively long temporal span of the signal, which helps to improve the representation power of the model. Moreover, by incorporating a voice activity detector (VAD), we propose an unsupervised enhancement algorithm that updates the noise dictionary adaptively from non-speech intervals, while for speech intervals the speech bases are adaptively learned with the noise bases kept fixed. Experimental results show that the proposed algorithm outperforms the competing algorithms substantially, especially when the background noise is highly non-stationary.

  • Joint Optimization of Perceptual Gain Function and Deep Neural Networks for Single-Channel Speech Enhancement

    Wei HAN  Xiongwei ZHANG  Gang MIN  Xingyu ZHOU  Meng SUN  

     
    LETTER-Noise and Vibration

      Vol:
    E100-A No:2
      Page(s):
    714-717

    In this letter, we explore the joint optimization of a perceptual gain function and deep neural networks (DNNs) for a single-channel speech enhancement task. A DNN architecture is proposed which incorporates the masking properties of the human auditory system to make the residual noise inaudible. This architecture directly trains a perceptual gain function which is used to estimate the magnitude spectrum of clean speech from noisy speech features. Experimental results demonstrate that the proposed speech enhancement approach achieves significant improvements over the baselines when tested with TIMIT sentences corrupted by various types of noise, regardless of whether the noise conditions are included in the training set.
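
    A classical, non-learned example of a time-frequency gain function is the Wiener gain; the letter's point is to let a DNN learn such a gain under a perceptual criterion instead of fixing its form. A minimal sketch (the a-priori SNR is estimated here by plain spectral subtraction; decision-directed refinements are omitted and the names are invented):

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, eps=1e-12):
    """Per-bin Wiener gain G = xi / (1 + xi), with the a-priori SNR xi
    estimated by spectral subtraction."""
    xi = np.maximum(noisy_power - noise_power, 0.0) / (noise_power + eps)
    return xi / (1.0 + xi)

noisy_power = np.array([4.0, 1.0, 0.25])  # three toy spectral bins
noise_power = np.array([1.0, 1.0, 1.0])
gain = wiener_gain(noisy_power, noise_power)
enhanced_power = gain * noisy_power  # bins dominated by noise are suppressed
```

    The gain always lies in [0, 1]: bins well above the noise floor pass nearly unchanged, while bins at or below it are driven toward zero.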

  • Multi-Feature Fusion Network for Salient Region Detection

    Zheng FANG  Tieyong CAO  Jibin YANG  Meng SUN  

     
    PAPER-Image

      Vol:
    E102-A No:6
      Page(s):
    834-841

    Salient region detection is a fundamental problem in computer vision and image processing. Deep learning models perform better than traditional approaches but suffer from huge parameter counts and slow speeds. To handle these problems, in this paper we propose the multi-feature fusion network (MFFN), an efficient salient region detection architecture based on a Convolutional Neural Network (CNN). A novel feature extraction structure is designed to obtain feature maps from the CNN, and a fusion dense block is used to fuse all low-level and high-level feature maps to derive the salient region results. MFFN is an end-to-end architecture which does not need any post-processing procedures. Experiments on benchmark datasets demonstrate that MFFN achieves state-of-the-art performance on salient region detection while requiring far fewer parameters and much less computation time. Ablation experiments demonstrate the effectiveness of each module in MFFN.