A method to synthesize facial caricatures with non-planar expression is proposed. Several methods have already been proposed to synthesize facial caricatures automatically, but they mainly produce flat, planar caricatures, which look somewhat monotonous. To generate an expressive facial caricature, the image should be rendered in a non-planar style, expressing the depth of the face through shading and highlighting. In this paper, a new method to achieve such a non-planar effect in facial caricatures is proposed, in which the grayscale information of the real face image is blended into the planar caricature. Some methods have also been proposed to generate non-planar facial caricatures, but the proposed method can adjust the degree of non-planar expression by interactive evolutionary computing, so that the obtained expression satisfies the user's subjective criteria. Since the face color appears to change when the grayscale information of the natural face image is mixed in, the color information of the skin area is also set by interactive evolutionary computing. Experimental results show the high performance of the proposed method.
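The core idea described above, blending the grayscale shading of a real face photograph into a flat caricature to convey depth, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function name `blend_nonplanar`, the mean-centered shading term, and the fixed `alpha` parameter are assumptions; in the paper, the degree of non-planar expression and the skin color are tuned interactively by evolutionary computing rather than by a hand-set constant.

```python
import numpy as np

def blend_nonplanar(caricature_rgb, face_gray, alpha=0.4):
    """Blend the grayscale shading of a real face photo into a flat
    caricature to suggest depth.

    caricature_rgb: float array of shape (H, W, 3), values in [0, 1]
    face_gray:      float array of shape (H, W), values in [0, 1]
    alpha:          degree of non-planar expression
                    (0 = flat caricature, larger = stronger shading)
    """
    # Center the shading around its mean so mid-tones leave the
    # caricature colors unchanged; only highlights and shadows are added.
    shading = face_gray - face_gray.mean()
    # Add the same shading offset to each color channel, then clamp.
    out = caricature_rgb + alpha * shading[..., None]
    return np.clip(out, 0.0, 1.0)

# Usage with synthetic data: a uniform mid-gray "caricature" and a
# left-to-right luminance ramp standing in for the face photograph.
caricature = np.full((4, 4, 3), 0.5)
photo_gray = np.linspace(0.0, 1.0, 16).reshape(4, 4)
shaded = blend_nonplanar(caricature, photo_gray, alpha=0.5)
```

In the actual system, `alpha` (and a skin-color correction, omitted here) would be the genes of an interactive evolutionary search: the system shows the user several candidate blends, the user rates them, and the ratings drive selection toward a subjectively satisfying result.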
Tatsuya UGAI
Meiji University
Keita SATO
Meiji University
Kaoru ARAKAWA
Meiji University
Hiroshi HARASHIMA
Meiji University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Tatsuya UGAI, Keita SATO, Kaoru ARAKAWA, Hiroshi HARASHIMA, "Interactive Evolutionary System for Synthesizing Facial Caricature with Non-planar Expression" in IEICE TRANSACTIONS on Fundamentals,
vol. E97-A, no. 11, pp. 2154-2160, November 2014, doi: 10.1587/transfun.E97.A.2154.
Abstract: A method to synthesize facial caricatures with non-planar expression is proposed. Several methods have already been proposed to synthesize facial caricatures automatically, but they mainly produce flat, planar caricatures, which look somewhat monotonous. To generate an expressive facial caricature, the image should be rendered in a non-planar style, expressing the depth of the face through shading and highlighting. In this paper, a new method to achieve such a non-planar effect in facial caricatures is proposed, in which the grayscale information of the real face image is blended into the planar caricature. Some methods have also been proposed to generate non-planar facial caricatures, but the proposed method can adjust the degree of non-planar expression by interactive evolutionary computing, so that the obtained expression satisfies the user's subjective criteria. Since the face color appears to change when the grayscale information of the natural face image is mixed in, the color information of the skin area is also set by interactive evolutionary computing. Experimental results show the high performance of the proposed method.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E97.A.2154/_p
@ARTICLE{e97-a_11_2154,
author={Tatsuya UGAI and Keita SATO and Kaoru ARAKAWA and Hiroshi HARASHIMA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Interactive Evolutionary System for Synthesizing Facial Caricature with Non-planar Expression},
year={2014},
volume={E97-A},
number={11},
pages={2154-2160},
abstract={A method to synthesize facial caricatures with non-planar expression is proposed. Several methods have already been proposed to synthesize facial caricatures automatically, but they mainly produce flat, planar caricatures, which look somewhat monotonous. To generate an expressive facial caricature, the image should be rendered in a non-planar style, expressing the depth of the face through shading and highlighting. In this paper, a new method to achieve such a non-planar effect in facial caricatures is proposed, in which the grayscale information of the real face image is blended into the planar caricature. Some methods have also been proposed to generate non-planar facial caricatures, but the proposed method can adjust the degree of non-planar expression by interactive evolutionary computing, so that the obtained expression satisfies the user's subjective criteria. Since the face color appears to change when the grayscale information of the natural face image is mixed in, the color information of the skin area is also set by interactive evolutionary computing. Experimental results show the high performance of the proposed method.},
keywords={},
doi={10.1587/transfun.E97.A.2154},
ISSN={1745-1337},
month={November},}
TY - JOUR
TI - Interactive Evolutionary System for Synthesizing Facial Caricature with Non-planar Expression
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 2154
EP - 2160
AU - Tatsuya UGAI
AU - Keita SATO
AU - Kaoru ARAKAWA
AU - Hiroshi HARASHIMA
PY - 2014
DO - 10.1587/transfun.E97.A.2154
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E97-A
IS - 11
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - 2014/11//
AB - A method to synthesize facial caricatures with non-planar expression is proposed. Several methods have already been proposed to synthesize facial caricatures automatically, but they mainly produce flat, planar caricatures, which look somewhat monotonous. To generate an expressive facial caricature, the image should be rendered in a non-planar style, expressing the depth of the face through shading and highlighting. In this paper, a new method to achieve such a non-planar effect in facial caricatures is proposed, in which the grayscale information of the real face image is blended into the planar caricature. Some methods have also been proposed to generate non-planar facial caricatures, but the proposed method can adjust the degree of non-planar expression by interactive evolutionary computing, so that the obtained expression satisfies the user's subjective criteria. Since the face color appears to change when the grayscale information of the natural face image is mixed in, the color information of the skin area is also set by interactive evolutionary computing. Experimental results show the high performance of the proposed method.
ER -