The search functionality is under construction.

Keyword Search Result

[Keyword] human face (3 hits)

1-3 of 3 hits
  • Real Time Feature-Based Facial Tracking Using Lie Algebras

    Akira INOUE  Tom DRUMMOND  Roberto CIPOLLA  

     
    LETTER
    Vol: E84-D No:12, Page(s): 1733-1738

    We have developed a novel human facial tracking system that operates in real time at video frame rate without any special hardware. The approach is based on Lie algebra and uses three-dimensional feature points on the targeted human face. It is assumed that a rough facial model (the relative coordinates of the three-dimensional feature points) is known. First, the initial feature positions on the face are determined using a model-fitting technique. Tracking then proceeds in the following sequence: (1) capture a new video frame and render the feature points onto the image plane; (2) search for the new positions of the feature points on the image plane; (3) compute the Euclidean matrix from the motion vectors and the three-dimensional information of the points; and (4) rotate and translate the feature points with the Euclidean matrix, and render the new points onto the image plane. The key algorithm of this tracker is the estimation of the Euclidean matrix by a least-squares technique based on Lie algebra. The resulting tracker performed very well on the task of tracking a human face.
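    The abstract's key step, estimating the Euclidean matrix by least squares on the Lie algebra, can be sketched roughly as follows. This is a hypothetical minimal version, not the authors' implementation: for simplicity it fits a twist in se(3) to 3D displacement vectors rather than to image-plane measurements, and all function names are our own.

    ```python
    import numpy as np

    def se3_generators():
        """The six 4x4 generators of se(3): three translations, three rotations."""
        G = []
        for i in range(3):                                   # translations
            g = np.zeros((4, 4)); g[i, 3] = 1.0
            G.append(g)
        for a, b in [(1, 2), (2, 0), (0, 1)]:                # rotations
            g = np.zeros((4, 4)); g[a, b] = -1.0; g[b, a] = 1.0
            G.append(g)
        return G

    def estimate_euclidean(points, displacements):
        """Least-squares twist from 3D feature points and their observed
        displacement vectors; returns the 4x4 Euclidean matrix exp(sum a_i G_i)."""
        G = se3_generators()
        P = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
        # each generator g moves a point X with velocity (g @ X)[:3]
        A = np.stack([(P @ g.T)[:, :3].ravel() for g in G], axis=1)
        alpha, *_ = np.linalg.lstsq(A, displacements.ravel(), rcond=None)
        M = sum(a * g for a, g in zip(alpha, G))
        # matrix exponential via truncated power series (fine for small motions)
        E, term = np.eye(4), np.eye(4)
        for k in range(1, 15):
            term = term @ M / k
            E = E + term
        return E
    ```

    The least-squares solve linearizes the motion around the identity, which is why the tracker can assume small inter-frame displacements at video frame rate.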

  • Measuring Three-Dimensional Shapes of a Moving Human Face Using Photometric Stereo Method with Two Light Sources and Slit Patterns

    Hitoshi SAJI  Hiromasa NAKATANI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E80-D No:8, Page(s): 795-801

    In this paper, a new method for measuring three-dimensional (3D) moving facial shapes is introduced. This method uses two light sources and a slit pattern projector. First, the normal vectors at points on a face are computed by the photometric stereo method with two light sources and a conventional video camera. Next, multiple light stripes are projected onto the face with the slit pattern projector, and the 3D coordinates of the points on the stripes are measured using a stereo vision algorithm. The normal vectors are then integrated over finite 2D intervals around the measured stripe points, yielding a 3D curved segment within each interval. Finally, all the curved segments are blended into the complete facial shape using a family of exponential functions. By switching the light rays at high speed, the time required for sampling data can be reduced, and the 3D shape of a moving human face at each instant can be measured.
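    The integration of normal vectors into a curved segment can be illustrated along a single scanline. This is a minimal sketch under a 1D assumption (the paper integrates over 2D intervals); the function name and interface are ours, not the authors':

    ```python
    import numpy as np

    def integrate_normals_1d(normals, x, z0=0.0):
        """Integrate surface normals along a scanline to recover a height
        profile (a 'curved segment' around a measured stripe point).

        normals: (N, 3) unit normals at sample positions x: (N,);
        z0: known height at x[0], e.g. from the slit-stripe stereo measurement."""
        # surface slope dz/dx follows from the normal: dz/dx = -n_x / n_z
        slopes = -normals[:, 0] / normals[:, 2]
        z = np.empty_like(x)
        z[0] = z0
        # trapezoidal integration of the slope field, anchored at the stripe point
        dz = 0.5 * (slopes[1:] + slopes[:-1]) * np.diff(x)
        z[1:] = z0 + np.cumsum(dz)
        return z
    ```

    Anchoring each integration at a stripe point measured by stereo is what keeps the drift of the integrated normals bounded within each finite interval.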

  • Detection and Pose Estimation of Human Face with Multiple Model Images

    Akitoshi TSUKAMOTO  Chil-Woo LEE  Saburo TSUJI  

     
    PAPER
    Vol: E77-D No:11, Page(s): 1273-1280

    This paper describes a new method for estimating the pose of a human face that moves abruptly in the real world. The virtue of this method is that it uses a very simple calculation, the disparity among multiple model images, rather than any facial features such as the facial organs. Since the disparity between the input image and a model image increases monotonically with the change in facial pose (view direction), the pose of the face in the input image can be estimated by computing the disparity against various model images of the face. To overcome the weakness caused by changes in facial patterns due to individuality or expression, the first model image of the face is detected with a qualitative feature model of the frontal face. This model contains statistical information about brightness, observed over many facial images, and is used in a model-based approach. The features are examined everywhere in the input image to compute the "faceness" of each region, and the region with the highest "faceness" is taken as the initial model image of the face. To obtain new model images for other poses of the face, temporary model images are synthesized by texture mapping using a previous model image and a 3D graphic model of the face. When the pose changes, the most appropriate region for a new model image is found by computing the disparity against the temporary model images. In this serial process, the obtained model images serve not only as templates for tracking the face in the subsequent image sequence, but also as texture images for synthesizing new temporary model images. The acquired model images are accumulated in memory, and their permissible extent of rotation or scale change is evaluated. In the latter part of the paper, we show experimental results on the robustness of the qualitative facial model used to detect the frontal face, and on the pose estimation algorithm tested on a long sequence of real images containing a moving human face.
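    The disparity calculation and the selection of the closest model image might be sketched as follows. The abstract does not specify the exact disparity formula, so this hypothetical sketch assumes a sum-of-squared-differences measure; both function names are ours:

    ```python
    import numpy as np

    def disparity(image, model):
        """Sum of squared pixel differences, standing in for the paper's
        'disparity' between an input region and a model image."""
        return float(np.sum((image.astype(float) - model.astype(float)) ** 2))

    def estimate_pose(image, model_images):
        """Return the pose label whose model image has minimal disparity
        to the input region; model_images maps pose label -> image array."""
        return min(model_images, key=lambda pose: disparity(image, model_images[pose]))
    ```

    Because the disparity grows monotonically with the change in view direction, the minimum over the accumulated model images identifies the nearest stored pose.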