
Keyword Search Result

[Keyword] facial expression (22 hits)

Showing 1-20 of 22 hits

  • A Novel Transferable Sparse Regression Method for Cross-Database Facial Expression Recognition

    Wenjing ZHANG  Peng SONG  Wenming ZHENG  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2021/10/12
      Vol:
    E105-D No:1
      Page(s):
    184-188

    In this letter, we propose a novel transferable sparse regression (TSR) method for cross-database facial expression recognition (FER). In TSR, we first present a novel regression function that regresses the data into a latent representation space instead of a strict binary label space. To further alleviate the influence of outliers and overfitting, we impose a row-sparsity constraint on the regression term, and we introduce a pairwise relation term to guide the feature transfer learning. Second, we design a global graph to transfer knowledge, which can well preserve the cross-database manifold structure. Moreover, we introduce a low-rank constraint on the graph regularization term to uncover additional structural information. Finally, several experiments are conducted on three popular facial expression databases, and the results validate that the proposed TSR method is superior to other non-deep and deep transfer learning methods.
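The row-sparsity constraint mentioned in the abstract is commonly enforced with an L2,1 norm, whose proximal operator shrinks whole rows of the regression matrix toward zero. The letter does not give its solver, so the following is only a generic pure-Python sketch of that standard operator, not the authors' actual algorithm (all names are hypothetical):

```python
import math

def row_soft_threshold(W, lam):
    """Proximal operator of the L2,1 norm (row sparsity): shrink each
    row of W by lam in Euclidean norm, zeroing rows whose norm is
    at most lam.  Rows that survive are rescaled by (norm - lam)/norm."""
    result = []
    for row in W:
        norm = math.sqrt(sum(v * v for v in row))
        if norm <= lam:
            result.append([0.0] * len(row))   # whole row eliminated
        else:
            scale = (norm - lam) / norm
            result.append([v * scale for v in row])
    return result

W = [[3.0, 4.0],   # row norm 5: kept, shrunk toward zero
     [0.1, 0.1]]   # small row norm: zeroed entirely
print(row_soft_threshold(W, 1.0))
```

Rows with small norm are removed outright, which is what makes the regression robust to outlier features.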

  • Unconstrained Facial Expression Recognition Based on Feature Enhanced CNN and Cross-Layer LSTM

    Ying TONG  Rui CHEN  Ruiyu LIANG  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2020/07/30
      Vol:
    E103-D No:11
      Page(s):
    2403-2406

    LSTM networks have been shown to excel at facial expression recognition on video sequences. In view of the limited representation ability of a single-layer LSTM, a hierarchical attention model with an enhanced feature branch is proposed. This new network architecture consists of a traditional VGG-16-FACE with an enhanced feature branch, followed by a cross-layer LSTM. The VGG-16-FACE with the enhanced branch extracts spatial features, while the cross-layer LSTM extracts the temporal relations between different frames of the video. The proposed method is evaluated on public emotion databases in subject-independent and cross-database tasks and outperforms state-of-the-art methods.

  • Robust Transferable Subspace Learning for Cross-Corpus Facial Expression Recognition

    Dongliang CHEN  Peng SONG  Wenjing ZHANG  Weijian ZHANG  Bingui XU  Xuan ZHOU  

     
    LETTER-Pattern Recognition

      Publicized:
    2020/07/20
      Vol:
    E103-D No:10
      Page(s):
    2241-2245

    In this letter, we propose a novel robust transferable subspace learning (RTSL) method for cross-corpus facial expression recognition. In this method, on one hand, we present a novel distance metric algorithm, which jointly considers the local and global distance distributions, to reduce the cross-corpus mismatch. On the other hand, we design a label guidance strategy to improve the discriminative ability of the subspace. Thus, RTSL is much more robust to the cross-corpus recognition problem than traditional transfer learning methods. We conduct extensive experiments on several facial expression corpora to evaluate the recognition performance of RTSL. The results demonstrate the superiority of the proposed method over several state-of-the-art methods.

  • A Novel Supervised Bimodal Emotion Recognition Approach Based on Facial Expression and Body Gesture

    Jingjie YAN  Guanming LU  Xiaodong BAI  Haibo LI  Ning SUN  Ruiyu LIANG  

     
    LETTER-Image

      Vol:
    E101-A No:11
      Page(s):
    2003-2006

    In this letter, we propose a supervised bimodal emotion recognition approach based on two important human emotion modalities: facial expression and body gesture. An effective supervised feature fusion algorithm named supervised multiset canonical correlation analysis (SMCCA) is presented to establish the linear connection between three sets of matrices, which contain the feature matrices of the two modalities and their concurrent category matrix. Test results on bimodal emotion recognition with the FABO database show that the SMCCA algorithm achieves better or comparable performance relative to unsupervised feature fusion algorithms, including canonical correlation analysis (CCA), sparse canonical correlation analysis (SCCA), and multiset canonical correlation analysis (MCCA).

  • A Novel Bimodal Emotion Database from Physiological Signals and Facial Expression

    Jingjie YAN  Bei WANG  Ruiyu LIANG  

     
    LETTER-Multimedia Pattern Processing

      Publicized:
    2018/04/17
      Vol:
    E101-D No:7
      Page(s):
    1976-1979

    In this paper, we establish a novel bimodal emotion database of physiological signals and facial expression, named PSFE. The physiological signals and facial expressions of the PSFE database are recorded simultaneously with a BIOPAC MP 150 and a Kinect for Windows, respectively. The PSFE database comprises 32 subjects, 11 women and 21 men, aged 20 to 25. Moreover, the PSFE database covers three basic emotion classes, calmness, happiness and sadness, which respectively correspond to the neutral, positive and negative emotional states. The database contains 288 samples in total, with 96 samples per emotion class.

  • Multicultural Facial Expression Recognition Based on Differences of Western-Caucasian and East-Asian Facial Expressions of Emotions

    Gibran BENITEZ-GARCIA  Tomoaki NAKAMURA  Masahide KANEKO  

     
    PAPER-Machine Vision and its Applications

      Publicized:
    2018/02/16
      Vol:
    E101-D No:5
      Page(s):
    1317-1324

    An increasing number of psychological studies have demonstrated that the six basic expressions of emotions are not culturally universal. However, automatic facial expression recognition (FER) systems disregard these findings and assume that facial expressions are universally expressed and recognized across different cultures. Therefore, this paper presents an analysis of Western-Caucasian and East-Asian facial expressions of emotions based on visual representations and cross-cultural FER. The visual analysis builds on the Eigenfaces method, and the cross-cultural FER combines appearance and geometric features by extracting Local Fourier Coefficients (LFC) and Facial Fourier Descriptors (FFD), respectively. Furthermore, two possible solutions for FER in multicultural environments are proposed: one based on early race detection, and one based on independent models for the culture-specific facial expressions found in the analysis. HSV color quantization combined with LFC and FFD composes the feature extraction for race detection, whereas culture-independent models of anger, disgust and fear are analyzed for the second solution. All tests were performed using Support Vector Machines (SVM) for classification and evaluated on five standard databases. Experimental results show that both solutions surpass the accuracy of FER systems in multicultural environments; however, the approach that individually considers the culture-specific facial expressions achieved the highest recognition rate.

  • A Joint Convolutional Bidirectional LSTM Framework for Facial Expression Recognition

    Jingwei YAN  Wenming ZHENG  Zhen CUI  Peng SONG  

     
    LETTER-Biocybernetics, Neurocomputing

      Publicized:
    2018/01/11
      Vol:
    E101-D No:4
      Page(s):
    1217-1220

    Facial expressions are generated by the actions of the facial muscles located at different facial regions. The spatial dependencies between different facial regions are worth exploring and can improve the performance of facial expression recognition. In this letter we propose a joint convolutional bidirectional long short-term memory (JCBLSTM) framework to jointly model discriminative facial textures and the spatial relations between different regions. We treat each row or column of the feature maps output from the CNN as an individual ordered sequence and employ LSTM to model the spatial dependencies within it. Moreover, a shortcut connection for the convolutional feature maps is introduced for joint feature representation. We conduct experiments on two databases to evaluate the proposed JCBLSTM method. The experimental results demonstrate that JCBLSTM achieves state-of-the-art performance on Multi-PIE and a very competitive result on FER-2013.

  • Facial Expression Recognition via Regression-Based Robust Locality Preserving Projections

    Jingjie YAN  Bojie YAN  Ruiyu LIANG  Guanming LU  Haibo LI  Shipeng XIE  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2017/11/06
      Vol:
    E101-D No:2
      Page(s):
    564-567

    In this paper, we present a novel regression-based robust locality preserving projections (RRLPP) method to effectively deal with noise and occlusion in facial expression recognition. Similar to the robust principal component analysis (RPCA) and robust regression (RR) approaches, the basic idea of RRLPP is to introduce a low-rank term and a sparse term for the facial expression image sample matrix, which simultaneously overcomes the shortcoming of the locality preserving projections (LPP) method and enhances the robustness of facial expression recognition. In contrast to RPCA and RR, however, RRLPP is a nonlinear robust subspace method that can effectively describe the local structure of facial expression images. Test results on the Multi-PIE facial expression database indicate that RRLPP can effectively handle noise and occlusion in facial expression images, while achieving a better or comparable recognition rate relative to both non-robust and robust subspace methods.

  • Maximum Volume Constrained Graph Nonnegative Matrix Factorization for Facial Expression Recognition

    Viet-Hang DUONG  Manh-Quan BUI  Jian-Jiun DING  Bach-Tung PHAM  Pham The BAO  Jia-Ching WANG  

     
    LETTER-Image

      Vol:
    E100-A No:12
      Page(s):
    3081-3085

    In this work, two new nonnegative matrix factorization (NMF) models are developed for facial expression recognition: maximum volume constrained nonnegative matrix factorization (MV_NMF) and maximum volume constrained graph nonnegative matrix factorization (MV_GNMF). They achieve sparseness through a larger simplicial cone constraint, and the extracted features preserve the topological structure of the original images.

  • Analyzing Perceived Empathy Based on Reaction Time in Behavioral Mimicry

    Shiro KUMANO  Kazuhiro OTSUKA  Masafumi MATSUDA  Junji YAMATO  

     
    PAPER-Affective Computing

      Vol:
    E97-D No:8
      Page(s):
    2008-2020

    This study analyzes the emotions established between people interacting in face-to-face conversation. By focusing on empathy and antipathy, especially the process by which they are perceived by external observers, this paper aims to elucidate how they are perceived and, from those findings, to develop a computational model that automatically infers perceived empathy/antipathy. This paper makes two main contributions. First, an experiment demonstrates that an observer's perception of an interacting pair is affected by the time lags between their actions and reactions in facial expressions, and by whether their expressions are congruent: for example, a congruent but delayed reaction is unlikely to be perceived as empathy. Based on these findings, we propose a probabilistic model that relates the perceived empathy/antipathy of external observers to the actions and reactions of conversation participants. An experiment is conducted on ten conversations performed by 16 women, in which the perceptions of nine external observers are gathered. The results demonstrate that timing cues are useful in improving the inference performance, especially for perceived antipathy.

  • Facial Expression Recognition Based on Sparse Locality Preserving Projection

    Jingjie YAN  Wenming ZHENG  Minghai XIN  Jingwei YAN  

     
    LETTER-Image

      Vol:
    E97-A No:7
      Page(s):
    1650-1653

    In this letter, a new sparse locality preserving projection (SLPP) algorithm is developed and applied to facial expression recognition. In comparison with the original locality preserving projection (LPP) algorithm, SLPP is able to simultaneously find the intrinsic manifold of facial feature vectors and perform facial feature selection. This is realized by adding l1-norm regularization to the LPP objective function, which is directly formulated as a least squares regression problem. We use two real facial expression databases (JAFFE and Ekman's POFA) to evaluate the proposed SLPP method; experiments show that the proposed SLPP approach achieves recognition rates of 77.60% and 82.29% on the JAFFE and POFA databases, respectively.

  • Facial Expression Recognition Based on Facial Region Segmentation and Modal Value Approach

    Gibran BENITEZ-GARCIA  Gabriel SANCHEZ-PEREZ  Hector PEREZ-MEANA  Keita TAKAHASHI  Masahide KANEKO  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E97-D No:4
      Page(s):
    928-935

    This paper presents a facial expression recognition algorithm based on segmentation of a face image into four facial regions (eyes-eyebrows, forehead, mouth and nose). In order to unify the different results obtained from combinations of facial regions, a modal value approach that takes the most frequent decision of the classifiers is proposed. The robustness of the algorithm is also evaluated under partial occlusion, using four different types of occlusion (half left/right, eyes and mouth occlusion). The proposed method employs a sub-block eigenphases algorithm that uses the phase spectrum and principal component analysis (PCA) for feature vector estimation, which is fed to a support vector machine (SVM) for classification. Experimental results show that the modal value approach improves the average recognition rate to more than 90%, and that performance remains high even under partial occlusion by excluding the occluded parts from the feature extraction process.
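As a rough illustration of the modal value approach described in the abstract, a minimal sketch follows: the most frequent decision among the per-region classifiers wins. The labels and the four-region setup here are hypothetical stand-ins, not the paper's actual classifier outputs:

```python
from collections import Counter

def modal_decision(region_decisions):
    """Modal value approach: return the most frequent label among the
    per-region classifier decisions.  For equal counts, Counter keeps
    first-encountered order, so ties go to the earliest label."""
    return Counter(region_decisions).most_common(1)[0][0]

# Hypothetical decisions from four facial-region classifiers
# (eyes-eyebrows, forehead, mouth, nose); occluded regions would
# simply be left out of the list.
decisions = ["happy", "happy", "surprise", "happy"]
print(modal_decision(decisions))  # happy
```

Excluding occluded regions before the vote is what keeps the decision usable under partial occlusion.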

  • Asymmetry in Facial Expressions as a Function of Social Skills

    Masashi KOMORI  Hiroko KAMIDE  Satoru KAWAMURA  Chika NAGAOKA  

     
    PAPER-Face Perception and Recognition

      Vol:
    E96-D No:3
      Page(s):
    507-513

    This study investigated the relationship between social skills and asymmetry in facial expressions. Three-dimensional facial landmark data for three expressions (neutral, happy, and angry) were obtained from Japanese participants (n = 62). Following the facial expression task, each participant completed KiSS-18 (Kikuchi's Scale of Social Skills; Kikuchi, 2007). Using a generalized Procrustes analysis, faces and their mirror-reversed versions were represented as points on a hyperplane. The asymmetry of each individual face was defined as the Euclidean distance between the face and its mirror-reversed version on this plane. The index of “expression asymmetry” for a given emotion was defined as the asymmetry level of the target emotion face minus the asymmetry level of that individual's neutral face. Correlation coefficients between KiSS-18 scores and expression asymmetry scores were computed for both happy and angry expressions, and significant negative correlations were found for both. The results indicate that symmetry in facial expressions increases with higher levels of social skills.
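The asymmetry measure described above can be sketched in simplified form: reflect the landmark configuration across the facial midline, swap left/right landmark correspondences, and take the Euclidean distance to the original. This pure-Python sketch skips the generalized Procrustes alignment the paper actually uses and assumes pre-aligned landmarks; all names and coordinates are hypothetical:

```python
import math

def asymmetry(landmarks, pairs):
    """Simplified asymmetry score: Euclidean distance between a 3D
    landmark configuration and its mirror image.  `pairs[i]` is the
    index of landmark i's left/right counterpart (midline landmarks
    map to themselves).  Assumes the face is already aligned with the
    x-axis as the left-right direction; a symmetric face scores 0."""
    mirrored = [None] * len(landmarks)
    for i, j in enumerate(pairs):
        x, y, z = landmarks[j]
        mirrored[i] = (-x, y, z)   # reflect across the midsagittal plane
    return math.sqrt(sum((a - b) ** 2
                         for p, q in zip(landmarks, mirrored)
                         for a, b in zip(p, q)))

# Perfectly symmetric three-point "face": left eye, right eye, nose tip
face = [(-1.0, 1.0, 0.0), (1.0, 1.0, 0.0), (0.0, 0.0, 0.5)]
pairs = [1, 0, 2]   # the eyes swap; the nose maps to itself
print(asymmetry(face, pairs))  # 0.0
```

The expression-asymmetry index would then be this score for an emotion face minus the score for the same person's neutral face.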

  • Facial Expression Recognition via Sparse Representation

    Ruicong ZHI  Qiuqi RUAN  Zhifei WANG  

     
    LETTER-Pattern Recognition

      Vol:
    E95-D No:9
      Page(s):
    2347-2350

    A facial-components-based facial expression recognition algorithm with a sparse representation classifier is proposed. The sparse representation classifier is computed by solving an L1-norm minimization problem on facial components, and the features of “important” training samples are selected to represent the test sample. Furthermore, the fuzzy integral is utilized to fuse the individual classifiers for the facial components. Experiments on frontal views and partially occluded facial images show that this method is efficient and robust to partial occlusion of facial images.
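The decision rule of a sparse representation classifier can be illustrated as follows: given sparse coefficients (assumed here to be precomputed by L1-norm minimization, which this sketch does not implement), the test sample is reconstructed using each class's coefficients in turn and assigned to the class with the smallest residual. All names and data below are hypothetical:

```python
import math

def src_classify(test, train, labels, coeffs):
    """Sparse representation classification decision rule: reconstruct
    the test vector using only the sparse coefficients belonging to
    each class, and return the class with the smallest residual.
    `coeffs` is assumed to come from a separate L1 minimization step."""
    best_label, best_res = None, float("inf")
    for cls in set(labels):
        recon = [0.0] * len(test)
        for x, lab, c in zip(train, labels, coeffs):
            if lab == cls:                      # keep this class's atoms only
                recon = [r + c * v for r, v in zip(recon, x)]
        res = math.sqrt(sum((t - r) ** 2 for t, r in zip(test, recon)))
        if res < best_res:
            best_label, best_res = cls, res
    return best_label

train = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]   # toy training features
labels = ["smile", "smile", "neutral"]
coeffs = [0.5, 0.5, 0.0]                        # sparse: only "smile" atoms active
print(src_classify([0.95, 0.05], train, labels, coeffs))  # smile
```

One such classifier per facial component would then be fused, per the abstract, with the fuzzy integral.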

  • A Robust 3D Face Recognition Algorithm Using Passive Stereo Vision

    Akihiro HAYASAKA  Koichi ITO  Takafumi AOKI  Hiroshi NAKAJIMA  Koji KOBAYASHI  

     
    PAPER

      Vol:
    E92-A No:4
      Page(s):
    1047-1055

    The recognition performance of the conventional 3D face recognition algorithm using ICP (Iterative Closest Point) degrades for 3D face data with expression changes. To address this problem, we consider the use of expression-invariant local regions of a face. We find the expression-invariant regions through a distance analysis between 3D face data with a neutral expression and with a smile, and propose a robust 3D face recognition algorithm using passive stereo vision. We demonstrate the effective recognition performance of the proposed algorithm compared with the conventional ICP-based algorithm through experiments on a stereo face image database that includes face images with expression changes.

  • Dual Two-Dimensional Fuzzy Class Preserving Projections for Facial Expression Recognition

    Ruicong ZHI  Qiuqi RUAN  Jiying WU  

     
    LETTER-Pattern Recognition

      Vol:
    E91-D No:12
      Page(s):
    2880-2883

    This paper proposes a novel algorithm for image feature extraction: the dual two-dimensional fuzzy class preserving projections ((2D)2FCPP). The main advantages of (2D)2FCPP over two-dimensional locality preserving projections (2DLPP) are: (1) it utilizes a fuzzy assignment mechanism to construct the weight matrix, which improves the classification results; (2) it incorporates 2DLPP and alternative 2DLPP to obtain a more efficient dimensionality reduction method, (2D)2LPP.

  • Facial Expression Recognition by Supervised Independent Component Analysis Using MAP Estimation

    Fan CHEN  Kazunori KOTANI  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E91-D No:2
      Page(s):
    341-350

    The permutation ambiguity of classical Independent Component Analysis (ICA) may cause problems in feature extraction for pattern classification. Especially when only a small subset of components is derived from the data, these components may not be the most distinctive for classification, because ICA is an unsupervised method. We include a selective prior on the de-mixing coefficients in classical ICA to alleviate this problem. Since the prior is constructed from the classification information in the training data, we refer to the proposed ICA model with a selective prior as supervised ICA (sICA). We formulate the learning rule for sICA using a Maximum a Posteriori (MAP) scheme and further derive a fixed-point algorithm for learning the de-mixing matrix. We investigate the performance of sICA in facial expression recognition in terms of both recognition rate and robustness, even with few independent components.

  • A Novel Feature Selection for Fuzzy Neural Networks for Personalized Facial Expression Recognition

    Dae-Jin KIM  Zeungnam BIEN  

     
    PAPER

      Vol:
    E87-A No:6
      Page(s):
    1386-1392

    This paper proposes a novel feature selection method for fuzzy neural networks and presents an application example for 'personalized' facial expression recognition. The proposed method is shown to achieve superior performance to many existing approaches.

  • User Reactions to Anthropomorphized Interfaces

    Tomoko KODA  

     
    PAPER

      Vol:
    E86-D No:8
      Page(s):
    1369-1377

    It is still an open question whether software agents should be personified in the interface. In order to study the effects of faces and facial expressions in the interface, a series of experiments was conducted to compare subjects' responses to, and evaluation of, different faces and facial expressions. The experimental results demonstrate that: 1) personified interfaces help users engage in a task, and are well suited to an entertainment domain; 2) people's impressions of a face in a task differ from those of the face in isolation, and the perceived intelligence of a face is determined not by the agent's appearance but by its competence; 3) there is a dichotomy between user groups with opposite opinions about personification. Thus, agent-based interfaces should be flexible enough to support the diversity of users' preferences and the nature of their tasks.

  • "Smartface"--A Robust Face Recognition System under Varying Facial Pose and Expression

    Osamu YAMAGUCHI  Kazuhiro FUKUI  

     
    INVITED PAPER

      Vol:
    E86-D No:1
      Page(s):
    37-44

    Face recognition provides an important means for realizing a man-machine interface and security. This paper presents "Smartface," a PC-based face recognition system using a temporal image sequence. The face recognition engine of the system employs a robust facial parts detection method and a pattern recognition algorithm which is stable against variations of facial pose and expression. The functions of Smartface include (i) screensaver with face recognition, (ii) customization of PC environment, and (iii) real-time disguising, an entertainment application. The system is operable on a portable PC with a camera and is implemented only with software; no image processing hardware is required.
