Takaaki AKIMOTO, Yasuhito SUENAGA
This paper presents a method for automatically creating the 3D facial models needed for facial image generation by 3D computer graphics. A 3D facial model of a specific person is obtained from just the front and side view images, without any human operation. The method has two parts: feature extraction and generic model modification. In the feature extraction part, the regions or edges that express facial features such as the eyes, nose, mouth, and chin outline are extracted from the front and side view images. In the generic model modification part, a generic head model is then modified based on the positions and shapes of the extracted facial features. As a result, a 3D model of the specific person is obtained. By using the specific model and the front and side view images, texture-mapped facial images can be generated easily.
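As a rough illustration of the two-part structure, the following Python sketch moves a few named landmarks of a generic head model to positions measured from the two views. The landmark names, coordinates, and functions are hypothetical stand-ins, not the paper's implementation; a real system would locate the features by image analysis and interpolate the displacements over the whole mesh.

    import numpy as np

    # Generic head model: a few named landmark vertices (x, y, z).
    GENERIC_LANDMARKS = {
        "eye_l": np.array([-30.0,  20.0, 10.0]),
        "eye_r": np.array([ 30.0,  20.0, 10.0]),
        "nose":  np.array([  0.0,   0.0, 25.0]),
        "mouth": np.array([  0.0, -25.0, 12.0]),
    }

    def extract_features(front_img, side_img):
        """Stand-in for the feature extraction part: return 3D landmark
        positions, taking (x, y) from the front view and z (depth) from
        the side view. Fixed values are returned here; an actual system
        extracts the eyes, nose, mouth, and chin outline from the images."""
        return {
            "eye_l": np.array([-33.0,  22.0,  9.0]),
            "eye_r": np.array([ 33.0,  22.0,  9.0]),
            "nose":  np.array([  0.0,  -2.0, 28.0]),
            "mouth": np.array([  0.0, -27.0, 13.0]),
        }

    def modify_generic_model(landmarks, measured):
        """Generic model modification part: move each generic landmark to
        its measured position (a full system would also deform the
        surrounding mesh vertices accordingly)."""
        return {name: measured.get(name, pos) for name, pos in landmarks.items()}

    specific_model = modify_generic_model(GENERIC_LANDMARKS,
                                          extract_features(None, None))
    print(specific_model["nose"])  # landmark moved to the measured position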
The development of computers capable of handling complex objects requires nonverbal interfaces that can bidirectionally mediate nonverbal communication, including the gestures of both people and computers. Nonverbal expressions are powerful media for enriching and facilitating human-computer interaction when used as interface languages. Four gestural modes are appropriate for human-computer interaction: the sign, indication, illustration, and manipulation modes. All of these modes can be conveyed by a generalized gesture interface that has a specific processor for each mode. The basic component of the generalized gesture interface, a gesture dictionary, is proposed. The dictionary can accept sign and indicating gestures, in which postures or body shapes are significant, pass their meanings to a computer, and display gestures from the computer. For this purpose, it converts body shapes into gestural codes by means of two code systems and, moreover, performs bidirectional conversions among several gesture representations. The dictionary is applied to the translation of Japanese into sign language: it displays an actor who speaks given Japanese sentences through gestures of sign words and the finger alphabet. The performance of this application confirms the adequacy and usefulness of the gesture dictionary.
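The bidirectional conversion idea can be sketched as a table that maps posture codes to word meanings and back, so the same dictionary serves both recognition (code to word) and display (word to code). The class name and the toy posture code below are illustrative assumptions, not the paper's actual code systems.

    from typing import Dict

    class GestureDictionary:
        """Bidirectional table between posture codes and word meanings."""
        def __init__(self) -> None:
            self._code_to_word: Dict[str, str] = {}
            self._word_to_code: Dict[str, str] = {}

        def register(self, code: str, word: str) -> None:
            self._code_to_word[code] = word
            self._word_to_code[word] = code

        def recognize(self, code: str) -> str:
            # code -> meaning: accept a gesture and pass its meaning on
            return self._code_to_word[code]

        def display(self, word: str) -> str:
            # meaning -> code: fetch the body shape to be displayed
            return self._word_to_code[word]

    d = GestureDictionary()
    d.register("R-HAND:FLAT,PALM-DOWN", "book")  # toy sign-word entry
    print(d.recognize("R-HAND:FLAT,PALM-DOWN"))  # -> book
    print(d.display("book"))                     # -> R-HAND:FLAT,PALM-DOWN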
It is known that the problem of finding a largest common subgraph is NP-hard for general graphs, even when the number of input graphs is two. It is also known that the problem can be solved in polynomial time if the input is restricted to two trees. In this paper, a randomized parallel (RNC) algorithm for finding a largest common subtree of two trees is presented. The RNC algorithm is obtained by combining the dynamic tree contraction technique with the RNC minimum weight perfect matching algorithm. Moreover, an efficient NC algorithm is presented for the case where the input trees are of bounded vertex degree; it works in O(log(n1) log(n2)) time using O(n1 n2) processors on a CREW PRAM, where n1 and n2 denote the numbers of vertices of the input trees. It is also proved that the problem is NP-hard if the number of input trees is more than two: the three-dimensional matching problem, a well-known NP-complete problem, is reduced to the problem of finding a largest common subtree of three trees.
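The role of matching in the two-tree case can be seen in a sequential sketch: a dynamic program over rooted subtrees in which the children of a matched vertex pair are themselves paired by a maximum weight bipartite matching. The rooted, root-to-root setting below is a simplifying assumption, and this is the polynomial-time sequential idea rather than the paper's parallel algorithm; scipy's assignment solver stands in for the matching step.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def lcs_size(t1, t2):
        """Size of a largest common subtree of rooted trees t1 and t2 that
        contains both roots; trees are nested tuples of child subtrees."""
        c1, c2 = list(t1), list(t2)
        if not c1 or not c2:
            return 1  # only the matched pair of roots
        # w[i][j] = best size when child i of t1 is paired with child j of t2
        w = np.array([[lcs_size(a, b) for b in c2] for a in c1])
        # pad to a square matrix so unmatched children contribute nothing
        n = max(len(c1), len(c2))
        padded = np.zeros((n, n))
        padded[:len(c1), :len(c2)] = w
        rows, cols = linear_sum_assignment(padded, maximize=True)
        return 1 + int(padded[rows, cols].sum())

    # A 2-vertex path and a 3-vertex star share a 2-vertex subtree
    # through their roots.
    print(lcs_size(((),), ((), ())))  # -> 2

Each pair of vertices at matched depths is evaluated at most once, so the sketch runs in polynomial time; the paper parallelizes this structure with dynamic tree contraction and RNC matching.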