A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically-defined abstract dimensions, and the latter are coded by the Facial Action Coding System. To obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained on the data. The effectiveness of the proposed method was verified by a subjective evaluation test. As a result, the Mean Opinion Score for the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
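The mapping the abstract describes can be sketched as a small feed-forward network from an emotion vector to FACS action-unit intensities. This is a hypothetical illustration under stated assumptions, not the authors' implementation: the emotion dimensions, the AU selection, and the network size are all placeholders, and the weights here are random rather than trained on the parallel data the paper collected.

```python
import math
import random

# Assumed emotion dimensions and action units -- chosen for illustration only;
# the paper's actual rating scales and AU inventory may differ.
EMOTION_DIMS = ["pleasantness", "arousal", "dominance"]
ACTION_UNITS = ["AU4 (brow lowerer)", "AU6 (cheek raiser)",
                "AU12 (lip corner puller)", "AU15 (lip corner depressor)"]


def init_mlp(n_in, n_hidden, n_out, seed=0):
    """Build random weights for a one-hidden-layer network (untrained sketch)."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
    return w1, w2


def forward(mlp, emotion_vec):
    """Map one rated emotional state to per-AU intensities in [0, 1]."""
    w1, w2 = mlp
    hidden = [math.tanh(sum(w * x for w, x in zip(row, emotion_vec)))
              for row in w1]
    # Sigmoid outputs keep each action-unit intensity in [0, 1].
    return [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
            for row in w2]


mlp = init_mlp(len(EMOTION_DIMS), 8, len(ACTION_UNITS))
# One utterance's rated emotional state (high pleasantness, moderate arousal).
au_intensities = forward(mlp, [0.8, 0.4, 0.1])
```

In the paper's framework the weights would be learned from the parallel corpus of rated emotional states and FACS-coded expressions; the sketch only shows the shape of the inference step.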
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hiroki MORI, Koh OHSHIMA, "Facial Expression Generation from Speaker's Emotional States in Daily Conversation" in IEICE TRANSACTIONS on Information,
vol. E91-D, no. 6, pp. 1628-1633, June 2008, doi: 10.1093/ietisy/e91-d.6.1628.
Abstract: A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically-defined abstract dimensions, and the latter are coded by the Facial Action Coding System. To obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained on the data. The effectiveness of the proposed method was verified by a subjective evaluation test. As a result, the Mean Opinion Score for the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
URL: https://global.ieice.org/en_transactions/information/10.1093/ietisy/e91-d.6.1628/_p
@ARTICLE{e91-d_6_1628,
author={Hiroki MORI and Koh OHSHIMA},
journal={IEICE TRANSACTIONS on Information},
title={Facial Expression Generation from Speaker's Emotional States in Daily Conversation},
year={2008},
volume={E91-D},
number={6},
pages={1628-1633},
abstract={A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically-defined abstract dimensions, and the latter are coded by the Facial Action Coding System. To obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained on the data. The effectiveness of the proposed method was verified by a subjective evaluation test. As a result, the Mean Opinion Score for the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.},
keywords={},
doi={10.1093/ietisy/e91-d.6.1628},
ISSN={1745-1361},
month={June}
}
TY - JOUR
TI - Facial Expression Generation from Speaker's Emotional States in Daily Conversation
T2 - IEICE TRANSACTIONS on Information
SP - 1628
EP - 1633
AU - Hiroki MORI
AU - Koh OHSHIMA
PY - 2008
DO - 10.1093/ietisy/e91-d.6.1628
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E91-D
IS - 6
JA - IEICE TRANSACTIONS on Information
Y1 - June 2008
AB - A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically-defined abstract dimensions, and the latter are coded by the Facial Action Coding System. To obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained on the data. The effectiveness of the proposed method was verified by a subjective evaluation test. As a result, the Mean Opinion Score for the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
ER -