
Author Search Result

[Author] Wenming ZHENG (17 hits)

Hits 1-17
  • Facial Expression Recognition Based on Sparse Locality Preserving Projection

    Jingjie YAN  Wenming ZHENG  Minghai XIN  Jingwei YAN  

     
    LETTER-Image

      Vol:
    E97-A No:7
      Page(s):
    1650-1653

    In this letter, a new sparse locality preserving projection (SLPP) algorithm is developed and applied to facial expression recognition. In comparison with the original locality preserving projection (LPP) algorithm, the presented SLPP algorithm is able to simultaneously find the intrinsic manifold of facial feature vectors and perform facial feature selection. This is realized by imposing l1-norm regularization on the LPP objective function, which is directly formulated as a least squares regression problem. We use two real facial expression databases (JAFFE and Ekman's POFA) to evaluate the proposed SLPP method, and the experiments show that it achieves recognition rates of 77.60% and 82.29% on the JAFFE and POFA databases, respectively.
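    The l1-regularized least-squares formulation that underlies approaches like SLPP can be sketched with a standard iterative soft-thresholding (ISTA) solver. This is a minimal illustration of the general technique under assumed names (`soft_threshold`, `l1_least_squares`), not the authors' implementation:

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_least_squares(X, y, lam=0.1, n_iter=500):
    """Solve min_w 0.5*||X w - y||^2 + lam*||w||_1 via ISTA.
    Zero entries of w correspond to de-selected features."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)            # gradient of the least-squares term
        w = soft_threshold(w - step * grad, step * lam)
    return w
```

    The l1 penalty drives irrelevant coefficients exactly to zero, which is how a regression-style objective can double as a feature selector.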

  • A Novel Adaptive Weighted Transfer Subspace Learning Method for Cross-Database Speech Emotion Recognition

    Keke ZHAO  Peng SONG  Shaokai LI  Wenjing ZHANG  Wenming ZHENG  

     
    LETTER-Speech and Hearing

      Publicized:
    2022/06/09
      Vol:
    E105-D No:9
      Page(s):
    1643-1646

    In this letter, we present an adaptive weighted transfer subspace learning (AWTSL) method for cross-database speech emotion recognition (SER), which can efficiently eliminate the discrepancy between source and target databases. Specifically, on the one hand, a subspace projection matrix is first learned to project the cross-database features into a common subspace, in which each target sample can be represented by the source samples through a sparse reconstruction matrix. On the other hand, we design an adaptive weighted matrix learning strategy, which can improve the reconstruction contribution of important features and eliminate the negative influence of redundant features. Finally, we conduct extensive experiments on four benchmark databases, and the experimental results demonstrate the efficacy of the proposed method.

  • Cross-Corpus Speech Emotion Recognition Based on Deep Domain-Adaptive Convolutional Neural Network

    Jiateng LIU  Wenming ZHENG  Yuan ZONG  Cheng LU  Chuangao TANG  

     
    LETTER-Pattern Recognition

      Publicized:
    2019/11/07
      Vol:
    E103-D No:2
      Page(s):
    459-463

    In this letter, we propose a novel deep domain-adaptive convolutional neural network (DDACNN) model to handle the challenging cross-corpus speech emotion recognition (SER) problem. The DDACNN model consists of two components: a feature extraction model based on a deep convolutional neural network (DCNN), and a domain-adaptive (DA) layer added to the DCNN that utilizes the maximum mean discrepancy (MMD) criterion. We use labeled spectrograms from the source speech corpus combined with unlabeled spectrograms from the target speech corpus as the input of two classic DCNNs to extract the emotional features of speech, and train the model with a special mixed loss combining a cross-entropy loss and an MMD loss. Compared to other classic cross-corpus SER methods, the major advantage of the DDACNN model is that it can extract robust, time-frequency-related speech features from spectrograms and narrow the discrepancy between the feature distributions of the source and target corpora to obtain better cross-corpus performance. In several cross-corpus SER experiments, our DDACNN achieved state-of-the-art performance on three public emotional speech corpora and is shown to handle the cross-corpus SER problem efficiently.
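    The MMD criterion used here measures the distance between the source and target feature distributions via kernel mean embeddings. A minimal numpy sketch with a Gaussian kernel and the simple biased estimator (the function names and estimator choice are illustrative assumptions, not the letter's exact formulation):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(Xs, Xt, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy
    between source samples Xs and target samples Xt."""
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()
```

    Minimizing such a term as part of a training loss pulls the two feature distributions together, which is the role the DA layer plays in the DDACNN.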

  • Micro-Expression Recognition by Regression Model and Group Sparse Spatio-Temporal Feature Learning

    Ping LU  Wenming ZHENG  Ziyan WANG  Qiang LI  Yuan ZONG  Minghai XIN  Lenan WU  

     
    LETTER-Pattern Recognition

      Publicized:
    2016/02/29
      Vol:
    E99-D No:6
      Page(s):
    1694-1697

    In this letter, a micro-expression recognition method is investigated that integrates both spatio-temporal facial features and a regression model. To this end, we first perform a multi-scale facial region division for each facial image and then extract a set of local binary patterns on three orthogonal planes (LBP-TOP) features corresponding to the divided facial regions of the micro-expression videos. Furthermore, we use the GSLSR model to build the linear regression relationship between the LBP-TOP facial feature vectors and the micro-expression label vectors. Finally, the learned GSLSR model is applied to predict the micro-expression category of each test micro-expression video. Experiments are conducted on both the CASME II and SMIC micro-expression databases to evaluate the performance of the proposed method, and the results demonstrate that it outperforms the baseline micro-expression recognition method.

  • A Novel Transferable Sparse Regression Method for Cross-Database Facial Expression Recognition

    Wenjing ZHANG  Peng SONG  Wenming ZHENG  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2021/10/12
      Vol:
    E105-D No:1
      Page(s):
    184-188

    In this letter, we propose a novel transferable sparse regression (TSR) method for cross-database facial expression recognition (FER). In TSR, we first present a novel regression function that regresses the data into a latent representation space instead of a strict binary label space. To further alleviate the influence of outliers and overfitting, we impose a row-sparsity constraint on the regression term, and a pairwise relation term is introduced to guide the feature transfer learning. Second, we design a global graph to transfer knowledge, which can well preserve the cross-database manifold structure. Moreover, we introduce a low-rank constraint on the graph regularization term to uncover additional structural information. Finally, several experiments are conducted on three popular facial expression databases, and the results validate that the proposed TSR method is superior to other non-deep and deep transfer learning methods.

  • Unsupervised Cross-Database Micro-Expression Recognition Using Target-Adapted Least-Squares Regression

    Lingyan LI  Xiaoyan ZHOU  Yuan ZONG  Wenming ZHENG  Xiuzhen CHEN  Jingang SHI  Peng SONG  

     
    LETTER-Pattern Recognition

      Publicized:
    2019/03/26
      Vol:
    E102-D No:7
      Page(s):
    1417-1421

    Over the past several years, research on micro-expression recognition (MER) has become an active topic in affective computing and computer vision because of its potential value in many application fields, e.g., lie detection. However, most previous works assume an ideal scenario in which both training and testing samples belong to the same micro-expression database, an assumption easily broken in practice. In this letter, we hence consider a more challenging scenario in which the training and testing samples come from different micro-expression databases, and investigate unsupervised cross-database MER, where the source database is labeled while the label information of the target database is entirely unseen. To solve this problem, we propose an effective method called target-adapted least-squares regression (TALSR). The basic idea of TALSR is to learn a regression coefficient matrix based on the source samples and their provided label information, while also enabling this learned matrix to suit the target micro-expression database. We are thus able to use the learned regression coefficient matrix to predict the micro-expression categories of the target samples. Extensive experiments on the CASME II and SMIC micro-expression databases are conducted to evaluate the proposed TALSR. The experimental results show that our TALSR performs better than many recent well-performing domain adaptation methods on unsupervised cross-database MER tasks.
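    The least-squares-regression backbone of such a method maps features to one-hot label vectors through a learned coefficient matrix. A minimal single-database sketch (without the target-adaptation term, which is the letter's actual contribution); the ridge regularizer and all names are illustrative assumptions:

```python
import numpy as np

def least_squares_coef(X, Y, lam=1e-2):
    """Closed-form ridge solution of min_C ||X C - Y||_F^2 + lam ||C||_F^2,
    where rows of X are samples and rows of Y are one-hot label vectors."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(X, C):
    """Predict the class index with the largest regression response."""
    return np.argmax(X @ C, axis=1)
```

    Target adaptation then amounts to adding extra terms to this objective so that the same coefficient matrix also fits the unlabeled target samples.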

  • Target-Adapted Subspace Learning for Cross-Corpus Speech Emotion Recognition

    Xiuzhen CHEN  Xiaoyan ZHOU  Cheng LU  Yuan ZONG  Wenming ZHENG  Chuangao TANG  

     
    LETTER-Speech and Hearing

      Publicized:
    2019/08/26
      Vol:
    E102-D No:12
      Page(s):
    2632-2636

    For cross-corpus speech emotion recognition (SER), how to obtain an effective feature representation that eliminates the discrepancy between the feature distributions of the source and target domains is a crucial issue. In this paper, we propose a Target-adapted Subspace Learning (TaSL) method for cross-corpus SER. The TaSL method tries to find a projection subspace in which the features regress the labels more accurately and the gap between the feature distributions of the target and source domains is bridged effectively. Then, in order to obtain a better projection matrix, ℓ1-norm and ℓ2,1-norm penalty terms are added to different regularization terms, respectively. Finally, we conduct extensive experiments on three public corpora, EmoDB, eNTERFACE and AFEW 4.0. The experimental results show that our proposed method achieves better performance than state-of-the-art methods on cross-corpus SER tasks.

  • Speech Emotion Recognition Based on Sparse Transfer Learning Method

    Peng SONG  Wenming ZHENG  Ruiyu LIANG  

     
    LETTER-Speech and Hearing

      Publicized:
    2015/04/10
      Vol:
    E98-D No:7
      Page(s):
    1409-1412

    In traditional speech emotion recognition systems, when the training and testing utterances are obtained from different corpora, the recognition rates decrease dramatically. To tackle this problem, in this letter, inspired by recent developments in sparse coding and transfer learning, a novel sparse transfer learning method is presented for speech emotion recognition. First, a sparse coding algorithm is employed to learn a robust sparse representation of emotional features. Then, a novel sparse transfer learning approach is presented, in which the distance between the feature distributions of the source and target datasets is used to regularize the objective function of sparse coding. The experimental results demonstrate that, compared with the automatic recognition approach, the proposed method achieves promising improvements in recognition rates and significantly outperforms the classic dimension-reduction-based transfer learning approach.

  • A Novel Discriminative Virtual Label Regression Method for Unsupervised Feature Selection

    Zihao SONG  Peng SONG  Chao SHENG  Wenming ZHENG  Wenjing ZHANG  Shaokai LI  

     
    LETTER-Pattern Recognition

      Publicized:
    2021/10/19
      Vol:
    E105-D No:1
      Page(s):
    175-179

    Unsupervised feature selection is an important dimensionality reduction technique for coping with high-dimensional data. It does not require prior label information and has recently attracted much attention. However, it cannot fully utilize the discriminative information of samples, which may affect feature selection performance. To tackle this problem, in this letter, we propose a novel discriminative virtual label regression (DVLR) method for unsupervised feature selection. In DVLR, we develop a virtual label regression function to guide subspace-learning-based feature selection, which can select more discriminative features. Moreover, a linear discriminant analysis (LDA) term is used to make the model more discriminative. To further make the model more robust and select more representative features, we impose the ℓ2,1-norm on the regression and feature selection terms. Finally, extensive experiments are carried out on several public datasets, and the results demonstrate that our proposed DVLR achieves better performance than several state-of-the-art unsupervised feature selection methods.
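    An ℓ2,1-norm penalty encourages entire rows of a projection matrix to vanish, so features can be ranked by the ℓ2 norms of their rows. A small numpy sketch of this general idea (function names are illustrative, not from the letter):

```python
import numpy as np

def l21_norm(W):
    """l2,1 norm of W: the sum of the l2 norms of its rows."""
    return np.sum(np.linalg.norm(W, axis=1))

def select_features(W, k):
    """Rank features by the l2 norms of the rows of the projection
    matrix W and return the indices of the k strongest rows."""
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]
```

    Penalizing `l21_norm(W)` during training pushes uninformative rows toward zero, after which `select_features` simply keeps the rows that survive.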

  • Micro-Expression Recognition by Leveraging Color Space Information

    Minghao TANG  Yuan ZONG  Wenming ZHENG  Jisheng DAI  Jingang SHI  Peng SONG  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2019/03/13
      Vol:
    E102-D No:6
      Page(s):
    1222-1226

    Micro-expression is a special type of facial expression that usually occurs when people try to hide their true emotions. Therefore, recognizing micro-expressions has potential value in many applications, e.g., lie detection. In this letter, we focus on this meaningful topic and investigate how to take full advantage of the color information provided by micro-expression samples to deal with the micro-expression recognition (MER) problem. To this end, we propose a novel method called the color space fusion learning (CSFL) model to fuse the spatiotemporal features extracted in different color spaces, such that the fused spatiotemporal features better describe micro-expressions. To verify the effectiveness of the proposed CSFL method, extensive MER experiments are conducted on SMIC, a widely used spatiotemporal micro-expression database. The experimental results show that CSFL can significantly improve the performance of spatiotemporal features in coping with MER tasks.

  • Speaker-Independent Speech Emotion Recognition Based on Two-Layer Multiple Kernel Learning

    Yun JIN  Peng SONG  Wenming ZHENG  Li ZHAO  Minghai XIN  

     
    LETTER-Speech and Hearing

      Vol:
    E96-D No:10
      Page(s):
    2286-2289

    In this paper, a two-layer Multiple Kernel Learning (MKL) scheme for speaker-independent speech emotion recognition is presented. In the first layer, MKL is used for feature selection. The training samples are separated into n groups according to some rules, and all groups are used for feature selection to obtain n sparse feature subsets; the intersection and the union of all feature subsets are the results of our feature selection methods. In the second layer, MKL is used again for speech emotion classification with the selected features. In order to evaluate the effectiveness of our proposed two-layer MKL scheme, we compare it with state-of-the-art results, and it is shown that our scheme yields a large gain in performance. Furthermore, another experiment is carried out to compare our feature selection method with other popular ones, and the results confirm its effectiveness.

  • A Novel Iterative Speaker Model Alignment Method from Non-Parallel Speech for Voice Conversion

    Peng SONG  Wenming ZHENG  Xinran ZHANG  Yun JIN  Cheng ZHA  Minghai XIN  

     
    LETTER-Speech and Hearing

      Vol:
    E98-A No:10
      Page(s):
    2178-2181

    Most current voice conversion methods rely on parallel speech, which is not easily obtained in practice. In this letter, a novel iterative speaker model alignment (ISMA) method is proposed to address this problem. First, the source and target speaker models are each trained from the background model by adopting the maximum a posteriori (MAP) algorithm. Then, a novel ISMA method is presented for the alignment and transformation of spectral features. Finally, the proposed ISMA approach is further combined with a Gaussian mixture model (GMM) to improve the conversion performance. A series of objective and subjective experiments are carried out on the CMU ARCTIC dataset, and the results demonstrate that the proposed method significantly outperforms the state-of-the-art approach.

  • Learning Corpus-Invariant Discriminant Feature Representations for Speech Emotion Recognition

    Peng SONG  Shifeng OU  Zhenbin DU  Yanyan GUO  Wenming MA  Jinglei LIU  Wenming ZHENG  

     
    LETTER-Speech and Hearing

      Publicized:
    2017/02/02
      Vol:
    E100-D No:5
      Page(s):
    1136-1139

    As a hot topic in speech signal processing, speech emotion recognition methods have developed rapidly in recent years, and some satisfactory results have been achieved. However, it should be noted that most of these methods are trained and evaluated on the same corpus. In reality, the training data and testing data are often collected from different corpora whose feature distributions differ, and these discrepancies will greatly affect recognition performance. To tackle this problem, a novel corpus-invariant discriminant feature representation algorithm, called transfer discriminant analysis (TDA), is presented for speech emotion recognition. The basic idea of TDA is to integrate the kernel LDA algorithm and the similarity measurement of distributions into one objective function. Experimental results under cross-corpus conditions show that our proposed method can significantly improve the recognition rates.

  • A Joint Convolutional Bidirectional LSTM Framework for Facial Expression Recognition

    Jingwei YAN  Wenming ZHENG  Zhen CUI  Peng SONG  

     
    LETTER-Biocybernetics, Neurocomputing

      Publicized:
    2018/01/11
      Vol:
    E101-D No:4
      Page(s):
    1217-1220

    Facial expressions are generated by the actions of the facial muscles located in different facial regions. The spatial dependencies among different facial regions are worth exploring and can improve the performance of facial expression recognition. In this letter we propose a joint convolutional bidirectional long short-term memory (JCBLSTM) framework to jointly model discriminative facial textures and the spatial relations between different regions. We treat each row or column of the feature maps output by the CNN as an individual ordered sequence and employ an LSTM to model the spatial dependencies within it. Moreover, a shortcut connection for the convolutional feature maps is introduced for joint feature representation. We conduct experiments on two databases to evaluate the proposed JCBLSTM method. The experimental results demonstrate that the JCBLSTM method achieves state-of-the-art performance on Multi-PIE and a very competitive result on FER-2013.
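    The row/column-to-sequence idea can be sketched in numpy: each row of a (C, H, W) feature map becomes a length-W sequence of C-dimensional vectors for one bidirectional LSTM, and each column a length-H sequence for another. The function name and axis convention below are illustrative assumptions:

```python
import numpy as np

def feature_map_to_sequences(fmap):
    """Split a (C, H, W) convolutional feature map into row and column
    sequences of C-dimensional vectors, ready for bidirectional LSTMs."""
    C, H, W = fmap.shape
    rows = fmap.transpose(1, 2, 0)  # (H, W, C): H sequences of length W
    cols = fmap.transpose(2, 1, 0)  # (W, H, C): W sequences of length H
    return rows, cols
```

    Feeding `rows` and `cols` to recurrent layers lets the model capture dependencies along both spatial axes that a purely convolutional receptive field would only cover locally.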

  • Transfer Semi-Supervised Non-Negative Matrix Factorization for Speech Emotion Recognition

    Peng SONG  Shifeng OU  Xinran ZHANG  Yun JIN  Wenming ZHENG  Jinglei LIU  Yanwei YU  

     
    LETTER-Speech and Hearing

      Publicized:
    2016/07/01
      Vol:
    E99-D No:10
      Page(s):
    2647-2650

    In practice, emotional speech utterances are often collected from different devices or under different conditions, which leads to discrepancy between the training and testing data and a sharp decrease in recognition rates. To solve this problem, in this letter, a novel transfer semi-supervised non-negative matrix factorization (TSNMF) method is presented. A semi-supervised non-negative matrix factorization (SNMF) algorithm, utilizing both labeled source and unlabeled target data, is adopted to learn common feature representations. Meanwhile, the maximum mean discrepancy (MMD) is employed as a similarity measurement to reduce the distance between the feature distributions of the two databases. Finally, the TSNMF algorithm, which optimizes the SNMF and MMD functions together, is proposed to obtain robust feature representations across databases. Extensive experiments demonstrate that, in comparison to state-of-the-art approaches, our proposed method can significantly improve cross-corpus recognition rates.
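    The non-negative matrix factorization at the heart of such a method can be sketched with the classic multiplicative update rules for the Frobenius loss. This is a plain unsupervised NMF, without the semi-supervision or MMD terms that the letter adds; all names are illustrative:

```python
import numpy as np

def nmf(V, r, n_iter=300, eps=1e-9, seed=0):
    """Factor a non-negative matrix V ≈ W H (rank r, Frobenius loss)
    with multiplicative updates, which keep W and H non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update basis
    return W, H
```

    Because the updates are element-wise multiplications of non-negative factors, non-negativity is preserved automatically, with no projection step needed.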

  • Shared Latent Embedding Learning for Multi-View Subspace Clustering

    Zhaohu LIU  Peng SONG  Jinshuai MU  Wenming ZHENG  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2023/10/17
      Vol:
    E107-D No:1
      Page(s):
    148-152

    Most existing multi-view subspace clustering approaches only capture the inter-view similarities between different views and ignore the optimal local geometric structure of the original data. To this end, in this letter, we put forward a novel method named shared latent embedding learning for multi-view subspace clustering (SLE-MSC), which can efficiently capture a better latent space. To be specific, we introduce a pseudo-label constraint to capture the intra-view similarities within each view. Meanwhile, we utilize a novel optimal graph Laplacian to learn the consistent latent representation, in which the common manifold is considered as the optimal manifold to obtain a more reasonable local geometric structure. Comprehensive experimental results indicate the superiority and effectiveness of the proposed method.

  • Integrating Facial Expression and Body Gesture in Videos for Emotion Recognition

    Jingjie YAN  Wenming ZHENG  Minghai XIN  Jingwei YAN  

     
    LETTER-Pattern Recognition

      Vol:
    E97-D No:3
      Page(s):
    610-613

    In this letter, we investigate a method that uses face and gesture image sequences to deal with the video-based bimodal emotion recognition problem, in which both the Harris plus cuboids spatio-temporal feature (HST) and the sparse canonical correlation analysis (SCCA) fusion method are applied. To effectively extract the spatio-temporal features, we adopt the Harris 3D feature detector proposed by Laptev and Lindeberg to find interest points in both face and gesture videos, and then apply the cuboids feature descriptor to extract the facial expression and gesture emotion features [1],[2]. To further extract the common emotion features from both the facial expression feature set and the gesture feature set, the SCCA method is applied, and the extracted emotion features are used for bimodal emotion classification, where the K-nearest-neighbor classifier and the SVM classifier are each used for this purpose. We test this method on the bimodal face and body gesture (FABO) database, and the experimental results demonstrate better recognition accuracy compared with other methods.