
Keyword Search Result

[Keyword] animation (26 hits)

Results 21-26 of 26

  • Development of a Sign-Language Communication System between Korean and Japanese Using 3D Animation Techniques and Intelligent Communication Method on the Internet

    Sang-Woon KIM  Jong-Woo LEE  Yoshinao AOKI  

     
    PAPER
    Vol: E83-A No:6, Page(s): 996-1004

    Sign language can serve as a communication means between avatars that share no common language. As a trial to overcome this linguistic barrier, we previously developed a 2D model-based sign-language chatting system between Korean and Japanese on the Internet. That system left some problems to be solved in natural animation and real-time transmission. In this paper, we employ a 3D character model to render stereoscopic gestures in the sign-language animation. We also apply CG animation techniques, namely a variable number of in-between frames and cubic spline interpolation, to generate realistic gestures. For real-time communication, on the other hand, we make use of an intelligent communication method on a client-server architecture. We implement a preliminary communication system with Visual C++ 5.0 and Open Inventor on Windows platforms. Experimental results show that the system could be used for avatar communication between different languages.
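    The variable-frame-number spline idea can be illustrated with a short sketch. This is not the paper's implementation: a Catmull-Rom cubic spline stands in for the unspecified cubic spline, and the keyframe angles and per-segment frame counts are made-up values.

```python
# Illustrative sketch: generating in-between frames for gesture animation by
# cubic (Catmull-Rom) spline interpolation of keyframe joint angles.
# Keyframe values and frame counts are assumptions, not taken from the paper.

def catmull_rom(p0, p1, p2, p3, t):
    """Cubic Catmull-Rom interpolation between p1 and p2 at t in [0, 1]."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def interpolate_gesture(keyframes, frames_per_segment):
    """Expand sparse keyframe angles into a smooth per-frame trajectory.

    frames_per_segment gives each segment its own number of in-between
    frames, so fast and slow gesture phases are sampled differently.
    """
    frames = []
    for i in range(len(keyframes) - 1):
        # Clamp endpoints so the spline is defined at the boundaries.
        p0 = keyframes[max(i - 1, 0)]
        p1, p2 = keyframes[i], keyframes[i + 1]
        p3 = keyframes[min(i + 2, len(keyframes) - 1)]
        n = frames_per_segment[i]
        for k in range(n):
            frames.append(catmull_rom(p0, p1, p2, p3, k / n))
    frames.append(keyframes[-1])
    return frames

# Example: an elbow angle (degrees) at four keyframes of a gesture.
angles = interpolate_gesture([0.0, 45.0, 30.0, 90.0], [4, 2, 6])
```

    The spline passes through every keyframe, so the gesture hits its key poses exactly while the in-between motion stays smooth.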

  • Preliminary Study on a Sign-Language Chatting System between Korea and Japan for Avatar Communication on the Internet

    Sang-Woon KIM  Ji-Young OH  Shin TANAHASHI  Yoshinao AOKI  

     
    LETTER-Human Communications
    Vol: E83-A No:2, Page(s): 386-389

    In order to investigate the possibility of avatar communication using sign language, in this paper we develop a sign-language chatting system between Korea and Japan on the Internet using CG animation techniques. The system has a server-client architecture in which the server analyzes images of Korean or Japanese sign language into a series of parameters for sign-language animation. We transmit these parameters, which are text data rather than images or compressed images, to the clients, which regenerate the corresponding CG animation from the received data. The chatting system is implemented with Visual C++ 5.0 on Windows platforms. Experimental results show that sign language could be used as a communication means between avatars of different languages.
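    The parameter-based transmission idea above can be sketched as follows: instead of sending video frames, the server sends a small text record per sign and the client regenerates the animation locally. The message format and field names here are illustrative assumptions, not the paper's actual protocol.

```python
# Sketch of text-parameter transmission for sign-language animation.
# The record layout ("word", "angles") is a made-up example format.
import json

def encode_sign(word, joint_angles):
    """Pack one sign-language gesture as a compact text message (server side)."""
    return json.dumps({"word": word, "angles": joint_angles}, separators=(",", ":"))

def decode_sign(message):
    """Recover the animation parameters from the text message (client side)."""
    record = json.loads(message)
    return record["word"], record["angles"]

msg = encode_sign("hello", [12.5, 40.0, 77.5])
word, angles = decode_sign(msg)
```

    A record like this is tens of bytes, which is why text parameters beat transmitting images or even compressed images in bandwidth.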

  • Disparity Mapping Technique and Fast Rendering Technique for Image Morphing

    Toshiyuki MORITSU  Makoto KATO  

     
    PAPER-Computer Graphics
    Vol: E83-D No:2, Page(s): 275-282

    We have developed a new disparity mapping technique for image morphing that prevents synthesized images from blurring, and a fast rendering technique that realizes interactive morphing animation. In the image-morphing rendering process, all pixels are moved according to their disparity maps, and the distorted images are then mixed with each other. The calculation cost of this process tends to be high because it involves pixel-by-pixel moving and mixing, and if the disparity maps are inaccurate, the synthesized images become blurred. This paper describes two new techniques for overcoming these problems. One is a disparity mapping technique by which the edges in each input image are accurately mapped to each other; this reduces blurring in the synthesized images. The other is a data transformation technique that replaces the morphing rendering process with texture mapping, an orthographic camera, α-blending, and z-buffering. This transformation lets 3D accelerators carry out the morphing rendering, so interactive morphing animations can be achieved on ordinary PCs.
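    The rendering step the abstract describes (move pixels along their disparity, then cross-dissolve the distorted images) can be sketched in one dimension. This is a naive stand-in rather than the paper's technique: forward mapping with last-write-wins collisions and original-value hole filling, on a single grayscale scanline instead of a full image.

```python
# Toy 1-D morphing render: warp both scanlines toward each other by the
# morph parameter s, then blend (1-s)*warped_src + s*warped_dst.

def warp(scanline, disparity, s):
    """Move each pixel s of the way along its disparity (forward mapping).
    Holes keep the original pixel value; collisions simply overwrite."""
    out = list(scanline)
    for x, value in enumerate(scanline):
        target = min(max(int(round(x + s * disparity[x])), 0), len(scanline) - 1)
        out[target] = value
    return out

def morph(src, dst, disp, s):
    """Cross-dissolve the two distorted images at morph parameter s."""
    warped_src = warp(src, disp, s)
    # Warp dst the remaining (1 - s) of the way back toward src.
    warped_dst = warp(dst, [-d for d in disp], 1.0 - s)
    return [(1.0 - s) * a + s * b for a, b in zip(warped_src, warped_dst)]

src = [10.0, 200.0, 10.0, 10.0]
dst = [10.0, 10.0, 200.0, 10.0]   # the bright pixel moved one step right
disparity = [0, 1, 0, 0]          # pixel 1 in src corresponds to pixel 2 in dst
mid = morph(src, dst, disparity, 0.5)
```

    The per-pixel loop above is exactly the cost the paper removes: expressing the same warp-and-blend as texture mapping plus α-blending lets a 3D accelerator do it in hardware.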

  • Emotion Enhanced Face to Face Meetings Using the Concept of Virtual Space Teleconferencing

    Liyanage C. DE SILVA  Tsutomu MIYASATO  Fumio KISHINO  

     
    PAPER
    Vol: E79-D No:6, Page(s): 772-780

    Here we investigate the unique advantages of our proposed Virtual Space Teleconferencing System (VST) in multimedia teleconferencing, with emphasis on facial emotion transmission and recognition. Specifically, we show that this concept enables a unique mode of communication in which the emotions of the local participant are transmitted to the remote party and recognized at a higher rate, by enhancing the emotions through intelligent processing between the local and remote participants. In other words, we show that such emotion-enhanced teleconferencing systems can surpass face-to-face meetings by effectively alleviating the barriers to recognizing emotions between different nations. We also show in this paper that it is a better alternative to the blurred or mosaicked facial images found in some television interviews with people who are unwilling to be exposed in public.

  • Visualization of Temporal and Spatial Information in Natural Language Descriptions

    Hiromi BABA  Tsukasa NOMA  Naoyuki OKADA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E79-D No:5, Page(s): 591-599

    This paper discusses visualization of temporal and spatial information in natural language descriptions (NLDs), focusing on the process of translating intermediate representations of NLDs into proper "scenarios" and "environments" for animations. First, the intermediate representations are formulated according to the idea of actors: actors and non-actors are represented as primitives of objects, whereas actions are represented as primitives of events. Temporal and spatial constraints from a given NLD text are imposed on the primitives. The representations containing unknown temporal or spatial parameters (time and coordinates) are then translated into evaluation functions that estimate the unlikelihood of deviations from the predicted temporal or spatial relations. In particular, the functions concerning actors' movements contain both temporal and spatial parameters. Next, the sum of all the evaluation functions is minimized by a nonlinear optimization method. The most proper actors' time-table (scenario) and non-actors' location-table (environment) for visualization are thus obtained. Implementation and experiments show that temporal and spatial information in NLDs is well connected through actors' movements for visualization.
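    The optimization step can be sketched as follows. The evaluation functions below are made-up stand-ins for the paper's unlikelihood terms (when an actor leaves, how long a walk takes, where it ends), and plain finite-difference gradient descent stands in for the unspecified nonlinear optimization method.

```python
# Sketch: pack unknown times and coordinates into a parameter vector, give
# each constraint a penalty ("unlikelihood") term, and minimize their sum.
# The constraints below are illustrative, not the paper's actual functions.

def total_cost(params):
    t_leave, t_arrive, x_actor = params
    cost = 0.0
    cost += (t_arrive - t_leave - 5.0) ** 2    # the walk should take ~5 time units
    cost += (x_actor - 10.0) ** 2              # actor predicted to end up near x=10
    cost += max(0.0, t_leave - t_arrive) ** 2  # temporal order: leave before arrive
    cost += t_leave ** 2                       # the scene starts around t=0
    return cost

def minimize(cost, params, lr=0.05, steps=2000, eps=1e-6):
    """Naive finite-difference gradient descent over all parameters."""
    params = list(params)
    for _ in range(steps):
        for i in range(len(params)):
            nudged = params[:]
            nudged[i] += eps
            grad = (cost(nudged) - cost(params)) / eps
            params[i] -= lr * grad
    return params

t_leave, t_arrive, x_actor = minimize(total_cost, [1.0, 1.0, 0.0])
```

    The minimizer settles near t_leave = 0, t_arrive = 5, x_actor = 10, i.e. a time-table and location consistent with all the predicted relations at once, which is the role the scenario/environment optimization plays in the paper.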

  • 3D Facial Model Creation Using Generic Model and Front and Side Views of Face

    Takaaki AKIMOTO  Yasuhito SUENAGA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E75-D No:2, Page(s): 191-197

    This paper presents a method for automatically creating the 3D facial models needed for facial image generation by 3D computer graphics. A 3D facial model of a specific person is obtained from just the front and side view images, without any human operation. The method has two parts: feature extraction and generic model modification. In the feature-extraction part, the regions or edges that express facial features such as the eyes, nose, mouth, and chin outline are extracted from the front and side view images. In the generic-model-modification part, a generic head model is then modified based on the position and shape of the extracted facial features. The result is a 3D model of the specific person. Using this model and the front and side view images, texture-mapped facial images can be generated easily.
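    The generic-model-modification step can be sketched as a feature-driven remapping. The abstract does not give the actual modification rule; a per-axis linear warp between two feature points is assumed here, with made-up 2-D coordinates standing in for the generic head mesh and the features extracted from the photographs.

```python
# Sketch: warp generic-model vertices so that the model's feature points
# (e.g. an eye corner and the chin tip) land on the feature positions
# extracted from the input images. All coordinates are illustrative.

def remap_axis(v, a_generic, b_generic, a_target, b_target):
    """Linearly remap one coordinate from the generic feature interval
    onto the interval measured in the person's images."""
    scale = (b_target - a_target) / (b_generic - a_generic)
    return a_target + (v - a_generic) * scale

def adapt_vertices(vertices, generic_feats, target_feats):
    """Remap every generic vertex so the two generic feature points map
    exactly onto the two extracted feature points (per-axis linear warp)."""
    (gax, gay), (gbx, gby) = generic_feats
    (tax, tay), (tbx, tby) = target_feats
    return [(remap_axis(x, gax, gbx, tax, tbx),
             remap_axis(y, gay, gby, tay, tby)) for x, y in vertices]

# Generic features: eye corner at (1, 8), chin tip at (5, 2).
# Extracted from the person's photos: eye corner (2, 9), chin tip (8, 1).
adapted = adapt_vertices([(1, 8), (5, 2), (3, 5)],
                         [(1, 8), (5, 2)],
                         [(2, 9), (8, 1)])
```

    Vertices that coincide with feature points move exactly onto the extracted positions, and everything in between is stretched proportionally; a real system would do this in 3-D over many feature regions.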
