
Author Search Result

[Author] Yuan ZONG (7 hits)

1-7 of 7 hits
  • Micro-Expression Recognition by Leveraging Color Space Information

    Minghao TANG  Yuan ZONG  Wenming ZHENG  Jisheng DAI  Jingang SHI  Peng SONG  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2019/03/13
    Vol: E102-D No:6
    Page(s): 1222-1226

    Micro-expressions are a special type of facial expression that usually occurs when people try to hide their true emotions. Recognizing micro-expressions therefore has potential value in many applications, e.g., lie detection. In this letter, we focus on this topic and investigate how to take full advantage of the color information in micro-expression samples to deal with the micro-expression recognition (MER) problem. To this end, we propose a novel color space fusion learning (CSFL) model that fuses the spatiotemporal features extracted in different color spaces, so that the fused spatiotemporal features describe micro-expressions better. To verify the effectiveness of the proposed CSFL method, extensive MER experiments are conducted on the widely used SMIC micro-expression database. The experimental results show that CSFL significantly improves the performance of spatiotemporal features on MER tasks.
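    A minimal sketch of the color-space fusion idea follows. It is illustrative only: the CSFL model itself is not reproduced here, the fusion is reduced to picking convex weights over per-color-space feature blocks by cross-validated accuracy, and X_rgb, X_ycbcr, and y are hypothetical arrays standing in for precomputed spatiotemporal (e.g., LBP-TOP) features and labels.

      # Hedged sketch: generic weighted fusion of per-color-space features,
      # not the actual CSFL optimization from the letter.
      import numpy as np
      from sklearn.linear_model import RidgeClassifier
      from sklearn.model_selection import cross_val_score

      def fuse_color_spaces(feature_sets, labels, candidate_weights):
          """Pick convex fusion weights for per-color-space features by CV accuracy."""
          best_score, best_w = -np.inf, None
          for w in candidate_weights:               # each w is a tuple summing to 1
              fused = np.hstack([wi * F for wi, F in zip(w, feature_sets)])
              score = cross_val_score(RidgeClassifier(), fused, labels, cv=3).mean()
              if score > best_score:
                  best_score, best_w = score, w
          return best_w, best_score

      # usage with hypothetical data: 100 clips, 177-dim features per color space
      rng = np.random.default_rng(0)
      X_rgb, X_ycbcr = rng.normal(size=(100, 177)), rng.normal(size=(100, 177))
      y = rng.integers(0, 3, size=100)
      w, acc = fuse_color_spaces([X_rgb, X_ycbcr], y,
                                 [(0.5, 0.5), (0.7, 0.3), (0.3, 0.7)])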

  • Group Sparse Reduced Rank Tensor Regression for Micro-Expression Recognition

    Sunan LI  Yuan ZONG  Cheng LU  Chuangao TANG  Yan ZHAO  

     
    LETTER-Human-computer Interaction

    Publicized: 2023/01/05
    Vol: E106-D No:4
    Page(s): 575-578

    To overcome the challenge that micro-expressions emerge only in a few small facial regions and with low intensity, some researchers have proposed facial region partition mechanisms and introduced group sparse learning methods for feature selection. However, such methods have shortcomings, including the complexity of region division and insufficient utilization of critical facial regions. To address these problems, we propose a novel Group Sparse Reduced Rank Tensor Regression (GSRRTR) that transforms the feature matrix into a tensor by placing blocks and features along different dimensions, so that grid and texture features can be processed separately without interfering with each other. Furthermore, by means of Tucker decomposition, the feature tensor can be decomposed into the product of a core tensor and a set of factor matrices, so the number of parameters and the computational complexity of the scheme are decreased. To evaluate the performance of the proposed micro-expression recognition method, extensive experiments are conducted on two micro-expression databases, CASME II and SMIC. The experimental results show that the proposed method achieves a recognition rate comparable to state-of-the-art methods with fewer parameters.
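    Since the letter builds on Tucker decomposition, a plain-NumPy truncated higher-order SVD (HOSVD) is sketched below to make that step concrete. It is illustrative only: the group-sparse reduced-rank regression itself is not reproduced, and the tensor shape (facial blocks x texture features x samples) is a hypothetical example.

      # Hedged sketch: truncated HOSVD, one standard way to compute a Tucker
      # decomposition; not the paper's full GSRRTR optimization.
      import numpy as np

      def unfold(T, mode):
          """Mode-n unfolding of a tensor into a matrix."""
          return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

      def hosvd(T, ranks):
          """Return a core tensor and factor matrices with T ~ core x_1 U1 x_2 U2 x_3 U3."""
          factors = []
          for mode, r in enumerate(ranks):
              U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
              factors.append(U[:, :r])
          core = T
          for mode, U in enumerate(factors):
              core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
          return core, factors

      # hypothetical feature tensor: 36 facial blocks x 59 texture features x 200 samples
      T = np.random.default_rng(0).normal(size=(36, 59, 200))
      core, (U1, U2, U3) = hosvd(T, ranks=(10, 15, 50))
      print(core.shape)   # (10, 15, 50): far fewer parameters than the full tensor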

  • Cross-Corpus Speech Emotion Recognition Based on Deep Domain-Adaptive Convolutional Neural Network

    Jiateng LIU  Wenming ZHENG  Yuan ZONG  Cheng LU  Chuangao TANG  

     
    LETTER-Pattern Recognition

    Publicized: 2019/11/07
    Vol: E103-D No:2
    Page(s): 459-463

    In this letter, we propose a novel deep domain-adaptive convolutional neural network (DDACNN) model to handle the challenging cross-corpus speech emotion recognition (SER) problem. The DDACNN model consists of two components: a feature extraction model based on a deep convolutional neural network (DCNN) and a domain-adaptive (DA) layer added to the DCNN that utilizes the maximum mean discrepancy (MMD) criterion. We use labeled spectrograms from the source speech corpus combined with unlabeled spectrograms from the target speech corpus as the input of two classic DCNNs to extract emotional speech features, and train the model with a mixed loss that combines a cross-entropy loss and an MMD loss. Compared to other classic cross-corpus SER methods, the major advantage of the DDACNN model is that it can extract robust time-frequency speech features from spectrograms and narrow the discrepancy between the feature distributions of the source and target corpora, yielding better cross-corpus performance. In several cross-corpus SER experiments, our DDACNN achieved state-of-the-art performance on three public emotional speech corpora, showing that it handles the cross-corpus SER problem effectively.
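    Because the domain-adaptive layer is driven by the maximum mean discrepancy (MMD) criterion, a small PyTorch sketch of a Gaussian-kernel MMD loss is given below. It is illustrative only: the DDACNN architecture, its training schedule, and the exact loss weighting are not reproduced, and the feature tensors and the 0.5 weight are hypothetical.

      # Hedged sketch: Gaussian-kernel MMD^2 between source and target feature
      # batches, combined with a cross-entropy term; not the paper's exact model.
      import torch
      import torch.nn.functional as F

      def mmd_loss(source_feats, target_feats, sigma=1.0):
          """Biased MMD^2 estimate between two feature batches (N x D and M x D)."""
          def gaussian_kernel(a, b):
              dist2 = torch.cdist(a, b) ** 2
              return torch.exp(-dist2 / (2.0 * sigma ** 2))
          k_ss = gaussian_kernel(source_feats, source_feats).mean()
          k_tt = gaussian_kernel(target_feats, target_feats).mean()
          k_st = gaussian_kernel(source_feats, target_feats).mean()
          return k_ss + k_tt - 2.0 * k_st

      # usage with hypothetical DCNN features: labeled source batch, unlabeled target batch
      src_feats, tgt_feats = torch.randn(32, 128), torch.randn(32, 128)
      logits, labels = torch.randn(32, 6), torch.randint(0, 6, (32,))
      total_loss = F.cross_entropy(logits, labels) + 0.5 * mmd_loss(src_feats, tgt_feats)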

  • Micro-Expression Recognition by Regression Model and Group Sparse Spatio-Temporal Feature Learning

    Ping LU  Wenming ZHENG  Ziyan WANG  Qiang LI  Yuan ZONG  Minghai XIN  Lenan WU  

     
    LETTER-Pattern Recognition

    Publicized: 2016/02/29
    Vol: E99-D No:6
    Page(s): 1694-1697

    In this letter, a micro-expression recognition method that integrates spatio-temporal facial features with a regression model is investigated. To this end, we first perform a multi-scale facial region division on each facial image and then extract local binary patterns on three orthogonal planes (LBP-TOP) features from the divided facial regions of the micro-expression videos. Furthermore, we use the GSLSR model to build a linear regression relationship between the LBP-TOP feature vectors and the micro-expression label vectors. Finally, the learned GSLSR model is applied to predict the micro-expression category of each test video. Experiments are conducted on both the CASME II and SMIC micro-expression databases to evaluate the proposed method, and the results demonstrate that it outperforms the baseline micro-expression recognition method.
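    For reference, a standard group-sparse least-squares regression objective of the kind the GSLSR abbreviation suggests can be written, in LaTeX notation, as below; this is a generic form, and the letter's exact formulation may differ:

      \min_{W}\; \|Y - XW\|_F^2 \;+\; \lambda \sum_{g=1}^{G} \|W_g\|_F

    where X stacks the region-wise LBP-TOP feature vectors, Y the micro-expression label (indicator) vectors, and W_g denotes the rows of W belonging to facial region g, so the penalty drives uninformative regions toward zero weight.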

  • Unsupervised Cross-Database Micro-Expression Recognition Using Target-Adapted Least-Squares Regression

    Lingyan LI  Xiaoyan ZHOU  Yuan ZONG  Wenming ZHENG  Xiuzhen CHEN  Jingang SHI  Peng SONG  

     
    LETTER-Pattern Recognition

    Publicized: 2019/03/26
    Vol: E102-D No:7
    Page(s): 1417-1421

    Over the past several years, research on micro-expression recognition (MER) has become an active topic in affective computing and computer vision because of its potential value in many application fields, e.g., lie detection. However, most previous works assumed an ideal scenario in which both training and testing samples belong to the same micro-expression database, an assumption that is easily broken in practice. In this letter, we hence consider a more challenging scenario in which the training and testing samples come from different micro-expression databases, and investigate unsupervised cross-database MER, where the source database is labeled while the label information of the target database is entirely unseen. To solve this problem, we propose an effective method called target-adapted least-squares regression (TALSR). The basic idea of TALSR is to learn a regression coefficient matrix from the source samples and their label information while also adapting this matrix to suit the target micro-expression database. We are thus able to use the learned regression coefficient matrix to predict the micro-expression categories of the target samples. Extensive experiments on the CASME II and SMIC micro-expression databases are conducted to evaluate the proposed TALSR. The experimental results show that TALSR outperforms many recent well-performing domain adaptation methods on unsupervised cross-database MER tasks.
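    To make the final prediction step concrete, a minimal sketch follows. It is illustrative only: the target-adaptation term that lets the coefficient matrix suit the unlabeled target database is the paper's contribution and is not reproduced, so this reduces to plain ridge regression on the source; all arrays are hypothetical.

      # Hedged sketch: learn a regression coefficient matrix on labeled source
      # features and use it to predict target categories; no target adaptation.
      import numpy as np

      def fit_coefficients(X_src, y_src, n_classes, lam=1.0):
          """Ridge regression from features to one-hot label vectors."""
          Y = np.eye(n_classes)[y_src]                 # N x C indicator matrix
          d = X_src.shape[1]
          return np.linalg.solve(X_src.T @ X_src + lam * np.eye(d), X_src.T @ Y)

      def predict(X_tgt, C):
          return np.argmax(X_tgt @ C, axis=1)

      # hypothetical cross-database setup: labeled source, unlabeled target
      rng = np.random.default_rng(0)
      X_src, y_src = rng.normal(size=(150, 64)), rng.integers(0, 3, size=150)
      X_tgt = rng.normal(size=(80, 64))
      C = fit_coefficients(X_src, y_src, n_classes=3)
      predicted_categories = predict(X_tgt, C)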

  • Joint Patch Weighting and Moment Matching for Unsupervised Domain Adaptation in Micro-Expression Recognition

    Jie ZHU  Yuan ZONG  Hongli CHANG  Li ZHAO  Chuangao TANG  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2021/11/17
    Vol: E105-D No:2
    Page(s): 441-445

    Unsupervised domain adaptation (DA) is a challenging machine learning problem because the labeled training (source) and unlabeled testing (target) sets belong to different domains and therefore have different feature distributions; it has recently attracted wide attention in micro-expression recognition (MER). Although some well-performing unsupervised DA methods have been proposed, they cannot adequately solve the problem of unsupervised DA in MER, also known as cross-domain MER. To deal with this challenging problem, in this letter we propose a novel unsupervised DA method called Joint Patch weighting and Moment Matching (JPMM). JPMM bridges the source and target micro-expression feature sets by minimizing their probability distribution divergence with a multi-order moment matching operation. Meanwhile, it exploits the most contributive facial patches through weight learning, so that a domain-invariant feature representation that preserves the discriminative information of micro-expressions can be learned. Finally, we carry out extensive experiments, and the results show that the proposed JPMM method is superior to recent state-of-the-art unsupervised DA methods in dealing with cross-domain MER.
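    A small sketch of the multi-order moment-matching idea follows. It is illustrative only: the joint patch-weight learning is not reproduced (patches are effectively weighted uniformly here), and the source/target feature matrices are hypothetical.

      # Hedged sketch: discrepancy between the first few moments of the source and
      # target feature distributions; not the full JPMM objective.
      import numpy as np

      def moment_discrepancy(X_src, X_tgt, orders=(1, 2, 3)):
          """Sum of squared differences of per-dimension (central) moments."""
          total = 0.0
          mu_s, mu_t = X_src.mean(axis=0), X_tgt.mean(axis=0)
          for k in orders:
              if k == 1:
                  m_s, m_t = mu_s, mu_t
              else:
                  m_s = ((X_src - mu_s) ** k).mean(axis=0)
                  m_t = ((X_tgt - mu_t) ** k).mean(axis=0)
              total += np.sum((m_s - m_t) ** 2)
          return total

      # hypothetical source/target micro-expression features (e.g., per-patch LBP)
      rng = np.random.default_rng(0)
      src = rng.normal(loc=0.0, scale=1.0, size=(120, 59))
      tgt = rng.normal(loc=0.3, scale=1.2, size=(90, 59))
      print(moment_discrepancy(src, tgt))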

  • Target-Adapted Subspace Learning for Cross-Corpus Speech Emotion Recognition

    Xiuzhen CHEN  Xiaoyan ZHOU  Cheng LU  Yuan ZONG  Wenming ZHENG  Chuangao TANG  

     
    LETTER-Speech and Hearing

    Publicized: 2019/08/26
    Vol: E102-D No:12
    Page(s): 2632-2636

    For cross-corpus speech emotion recognition (SER), obtaining an effective feature representation that eliminates the discrepancy between the feature distributions of the source and target domains is a crucial issue. In this paper, we propose a Target-adapted Subspace Learning (TaSL) method for cross-corpus SER. The TaSL method tries to find a projection subspace in which the features regress the labels more accurately and the gap between the feature distributions of the target and source domains is bridged effectively. Then, to obtain a better projection matrix, ℓ1-norm and ℓ2,1-norm penalties are added to different regularization terms, respectively. Finally, we conduct extensive experiments on three public corpora: EmoDB, eNTERFACE, and AFEW 4.0. The experimental results show that the proposed method achieves better performance than state-of-the-art methods on cross-corpus SER tasks.
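    One plausible generic form of such a target-adapted subspace objective, written in LaTeX notation, is shown below; it is only a sketch of the ingredients named in the abstract, and the paper's exact formulation and term grouping may differ:

      \min_{P}\; \|Y_s - X_s P\|_F^2 \;+\; \alpha\, D\!\left(X_s P,\, X_t P\right) \;+\; \lambda_1 \|P\|_1 \;+\; \lambda_2 \|P\|_{2,1}

    where X_s and X_t are the source and target feature matrices, Y_s the source label matrix, P the projection matrix, and D(·,·) some distribution-discrepancy measure between the projected source and target features.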