
Author Search Result

[Author] Jingjie YAN (6 hits)

  • A Novel Bimodal Emotion Database from Physiological Signals and Facial Expression

    Jingjie YAN  Bei WANG  Ruiyu LIANG  

     
    LETTER-Multimedia Pattern Processing
    Publicized: 2018/04/17  Vol: E101-D No:7  Page(s): 1976-1979

    In this paper, we establish a novel bimodal emotion database of physiological signals and facial expression, named PSFE. The physiological signals and facial expressions are recorded simultaneously by a BIOPAC MP 150 and a Kinect for Windows, respectively. The PSFE database comprises 32 subjects (11 women and 21 men) aged 20 to 25. It covers three basic emotion classes, calmness, happiness, and sadness, which respectively correspond to the neutral, positive, and negative emotion states. In total the database contains 288 samples, 96 per emotion class.
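
    As a sketch of how a PSFE-like organization might be indexed in code, the snippet below builds a sample list from the counts given in the abstract (32 subjects, three classes, 96 samples each). The directory layout, file names, and subject ordering are hypothetical assumptions, not details of the actual database.

    ```python
    # Hypothetical index for a PSFE-like database; only the subject/class/sample
    # counts come from the abstract, everything else is assumed for illustration.
    from dataclasses import dataclass

    EMOTIONS = ("calmness", "happiness", "sadness")  # neutral / positive / negative

    @dataclass
    class Sample:
        subject: int      # 1..32 (11 women, 21 men, aged 20 to 25)
        emotion: str      # one of EMOTIONS
        physio_path: str  # BIOPAC recording (hypothetical path)
        face_path: str    # Kinect facial-expression recording (hypothetical path)

    def build_index():
        samples = []
        for emotion in EMOTIONS:
            for i in range(96):                       # 96 samples per class
                subject = i % 32 + 1                  # assumed subject ordering
                samples.append(Sample(
                    subject, emotion,
                    f"physio/{emotion}/{i:03d}.mat",  # hypothetical layout
                    f"face/{emotion}/{i:03d}.avi",
                ))
        assert len(samples) == 288                    # 3 classes x 96 = 288 in total
        return samples
    ```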

  • Prior Information Based Decomposition and Reconstruction Learning for Micro-Expression Recognition

    Jinsheng WEI  Haoyu CHEN  Guanming LU  Jingjie YAN  Yue XIE  Guoying ZHAO  

     
    LETTER-Image Processing and Video Processing
    Publicized: 2023/07/13  Vol: E106-D No:10  Page(s): 1752-1756

    Micro-expression recognition (MER) draws intensive research interest because micro-expressions (MEs) can reveal genuine emotions. Prior information can guide a model to learn discriminative ME features effectively. However, most works focus on general models with stronger representation ability that adaptively aggregate ME movement information in a holistic way, which may ignore the prior information and properties of MEs. To address this issue, driven by the prior information that the category of an ME can be inferred from the relationships between the actions of different facial components, this work designs a novel model that conforms to this prior information and learns ME movement features in an interpretable way. Specifically, this paper proposes a Decomposition and Reconstruction-based Graph Representation Learning (DeRe-GRL) model to effectively learn high-level ME features. DeRe-GRL includes two modules: an Action Decomposition Module (ADM) and a Relation Reconstruction Module (RRM), where ADM learns the action features of key facial components and RRM explores the relationships between these action features. Based on the key facial components, ADM divides the geometric movement features extracted by a graph-model-based backbone into several sub-features and learns mapping matrices that project these sub-features into multiple action features; RRM then learns weights over all action features to build the relationships between them. The experimental results demonstrate the effectiveness of the proposed modules, and the proposed method achieves competitive performance.
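
    The decomposition-then-reconstruction idea can be illustrated with a small NumPy sketch: per-component sub-features are mapped into action features (the ADM role), which are then combined by learned weights (the RRM role). The dimensions, the number of key facial components, and the softmax weighting below are assumptions for illustration; the paper's actual architecture is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K, D = 5, 64        # assumed: 5 facial key components, 64-dim movement features

    # Stand-in for the backbone output: geometric movement features already
    # divided into per-component sub-features (what ADM operates on).
    sub_features = [rng.standard_normal(D) for _ in range(K)]

    # ADM: a learned mapping matrix per component projects each sub-feature
    # into an action feature (random matrices stand in for learned weights).
    maps = [rng.standard_normal((D, D)) for _ in range(K)]
    actions = np.stack([M @ f for M, f in zip(maps, sub_features)])   # (K, D)

    # RRM: learn weights over the action features and combine them into a
    # holistic micro-expression feature (softmax weighting is an assumed form).
    logits = rng.standard_normal(K)            # stand-in for learned relation scores
    w = np.exp(logits) / np.exp(logits).sum()
    me_feature = w @ actions                   # weighted combination, shape (D,)
    ```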

  • Integrating Facial Expression and Body Gesture in Videos for Emotion Recognition

    Jingjie YAN  Wenming ZHENG  Minhai XIN  Jingwei YAN  

     
    LETTER-Pattern Recognition
    Vol: E97-D No:3  Page(s): 610-613

    In this letter, we study the use of face and gesture image sequences for video-based bimodal emotion recognition, applying both the Harris plus cuboids spatio-temporal feature (HST) and the sparse canonical correlation analysis (SCCA) fusion method to this end. To extract the spatio-temporal features effectively, we adopt the Harris 3D feature detector proposed by Laptev and Lindeberg to detect interest points in both face and gesture videos, and then apply the cuboids feature descriptor to extract the facial expression and gesture emotion features [1],[2]. To further extract the common emotion features from the facial expression and gesture feature sets, SCCA is applied, and the extracted emotion features are used for bimodal emotion classification, with the K-nearest neighbor classifier and the SVM classifier each used for this purpose. We test this method on the bimodal face and body gesture (FABO) database, and the experimental results demonstrate better recognition accuracy compared with other methods.
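
    The SCCA fusion step can be approximated by alternating l1-regularized regressions, a common heuristic for sparse CCA; the sketch below follows that reading and is illustrative only, since the paper's exact SCCA formulation may differ. The toy feature matrices, alpha, and iteration count are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def scca_pair(X, Y, alpha=0.01, iters=20):
        """One pair of sparse canonical directions (wx, wy) by alternating Lasso."""
        wy = np.ones(Y.shape[1]) / np.sqrt(Y.shape[1])
        wx = np.zeros(X.shape[1])
        for _ in range(iters):
            wx = Lasso(alpha=alpha).fit(X, Y @ wy).coef_   # sparse direction for X
            wx /= np.linalg.norm(wx) + 1e-12
            wy = Lasso(alpha=alpha).fit(Y, X @ wx).coef_   # sparse direction for Y
            wy /= np.linalg.norm(wy) + 1e-12
        return wx, wy

    # Usage: project both modalities and hand the fused feature to KNN/SVM.
    X = np.random.randn(100, 50)   # e.g. HST features from face videos (toy data)
    Y = np.random.randn(100, 40)   # e.g. HST features from gesture videos (toy data)
    wx, wy = scca_pair(X, Y)
    fused = np.column_stack([X @ wx, Y @ wy])   # common emotion feature
    ```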

  • Facial Expression Recognition Based on Sparse Locality Preserving Projection

    Jingjie YAN  Wenming ZHENG  Minghai XIN  Jingwei YAN  

     
    LETTER-Image
    Vol: E97-A No:7  Page(s): 1650-1653

    In this letter, a new sparse locality preserving projection (SLPP) algorithm is developed and applied to facial expression recognition. In comparison with the original locality preserving projection (LPP) algorithm, the presented SLPP algorithm can simultaneously find the intrinsic manifold of facial feature vectors and perform facial feature selection. This is realized by adding l1-norm regularization to the LPP objective function, which is formulated directly as a least-squares regression problem. We use two real facial expression databases (JAFFE and Ekman's POFA) to evaluate the proposed SLPP method, and the experiments show that it achieves recognition rates of 77.60% and 82.29% on the JAFFE and POFA databases, respectively.
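
    The least-squares formulation suggests a spectral-regression-style procedure: first solve the LPP graph embedding, then recover sparse projection axes with l1-regularized (Lasso) regression. The sketch below follows that reading; the neighborhood size and regularization strength are assumptions, and this is not the paper's reference implementation.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.linear_model import Lasso
    from sklearn.neighbors import kneighbors_graph

    def slpp(X, n_components=2, k=5, alpha=0.01):
        """X: (n_samples, n_features). Returns a sparse projection matrix A."""
        W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
        W = 0.5 * (W + W.T).toarray()       # symmetrized adjacency weights
        D = np.diag(W.sum(axis=1))
        L = D - W                           # graph Laplacian
        # LPP embedding: smallest generalized eigenvectors of L y = lambda D y
        vals, vecs = eigh(L, D)
        Y = vecs[:, 1:n_components + 1]     # skip the trivial constant eigenvector
        # Sparse projections: regress each embedding dimension on X with Lasso
        A = np.column_stack([Lasso(alpha=alpha).fit(X, Y[:, j]).coef_
                             for j in range(n_components)])
        return A                            # columns are sparse projection axes
    ```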

  • Facial Expression Recognition via Regression-Based Robust Locality Preserving Projections

    Jingjie YAN  Bojie YAN  Ruiyu LIANG  Guanming LU  Haibo LI  Shipeng XIE  

     
    LETTER-Image Recognition, Computer Vision
    Publicized: 2017/11/06  Vol: E101-D No:2  Page(s): 564-567

    In this paper, we present a novel regression-based robust locality preserving projections (RRLPP) method to effectively deal with noise and occlusion in facial expression recognition. Similar to the robust principal component analysis (RPCA) and robust regression (RR) approaches, the basic idea of RRLPP is to introduce a low-rank term and a sparse term for the facial expression image sample matrix, which simultaneously overcomes the shortcomings of the locality preserving projections (LPP) method and enhances the robustness of facial expression recognition. Unlike RPCA and RR, however, RRLPP is a nonlinear robust subspace method that can effectively describe the local structure of facial expression images. Test results on the Multi-PIE facial expression database indicate that RRLPP can effectively handle noise and occlusion in facial expression images, while achieving a recognition rate better than or comparable to both non-robust and robust subspace methods.
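
    The low-rank-plus-sparse idea can be sketched with a toy alternating-shrinkage loop in the spirit of principal component pursuit: occlusion and noise are absorbed by the sparse term, and the low-rank part would then feed the locality preserving projection. The thresholds and iteration count are assumptions, and the paper's joint RRLPP formulation differs from this two-stage illustration.

    ```python
    import numpy as np

    def shrink(X, tau):
        """Entrywise soft thresholding."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def low_rank_sparse(M, lam=None, tau=1.0, iters=50):
        """Split M into a low-rank part plus a sparse part (toy PCP-style loop)."""
        lam = lam or 1.0 / np.sqrt(max(M.shape))
        S = np.zeros_like(M)
        L = M.copy()
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U * shrink(s, tau)) @ Vt    # singular-value thresholding
            S = shrink(M - L, lam * tau)     # sparse term: noise and occlusion
        return L, S

    M = np.random.randn(64, 200)             # toy stand-in for the image sample matrix
    L, S = low_rank_sparse(M)                # L would feed the projection step
    ```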

  • A Novel Supervised Bimodal Emotion Recognition Approach Based on Facial Expression and Body Gesture

    Jingjie YAN  Guanming LU  Xiaodong BAI  Haibo LI  Ning SUN  Ruiyu LIANG  

     
    LETTER-Image
    Vol: E101-A No:11  Page(s): 2003-2006

    In this letter, we propose a supervised bimodal emotion recognition approach based on two important human emotion modalities: facial expression and body gesture. An effective supervised feature fusion algorithm named supervised multiset canonical correlation analysis (SMCCA) is presented to establish linear connections among three matrices, namely the feature matrices of the two modalities and their shared category matrix. Test results on bimodal emotion recognition with the FABO database show that the SMCCA algorithm achieves better or comparable performance relative to unsupervised feature fusion algorithms including canonical correlation analysis (CCA), sparse canonical correlation analysis (SCCA), and multiset canonical correlation analysis (MCCA).
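
    Multiset CCA with the category matrix as a third set can be written as a generalized eigenproblem over the block covariance matrix (the standard SUMCOR formulation); the sketch below follows that reading. The regularization term and the one-hot label encoding are assumptions, and the paper's exact SMCCA variant may differ.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def smcca(blocks, reg=1e-3):
        """blocks: list of centered (n_samples, d_i) matrices. Returns one
        projection vector per block (leading canonical direction)."""
        n = blocks[0].shape[0]
        dims = [B.shape[1] for B in blocks]
        Z = np.hstack(blocks)
        C = Z.T @ Z / n                      # full cross-covariance matrix
        D = np.zeros_like(C)                 # block-diagonal within-set part
        start = 0
        for B, d in zip(blocks, dims):
            D[start:start + d, start:start + d] = B.T @ B / n + reg * np.eye(d)
            start += d
        vals, vecs = eigh(C, D)              # generalized eigenproblem
        w, out, start = vecs[:, -1], [], 0   # leading direction, split per block
        for d in dims:
            out.append(w[start:start + d])
            start += d
        return out

    # Usage: face features, gesture features, and one-hot labels as the third set.
    n = 120
    X1 = np.random.randn(n, 30); X1 -= X1.mean(0)     # toy facial-expression features
    X2 = np.random.randn(n, 20); X2 -= X2.mean(0)     # toy body-gesture features
    X3 = np.eye(4)[np.random.randint(0, 4, n)]        # one-hot category matrix
    w1, w2, w3 = smcca([X1, X2, X3 - X3.mean(0)])
    ```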