
Author Search Result

[Author] Jaihie KIM (5 hits)

  • A New Iris Recognition Method Using Independent Component Analysis

    Seung-In NOH  Kwanghyuk BAE  Kang Ryoung PARK  Jaihie KIM  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E88-D  No: 11  Page(s): 2573-2581

    In the conventional method, iris features are extracted with quadrature 2D Gabor wavelets, and recognition is performed with a 256-byte iris code computed by applying the wavelets to a given area of the iris. However, the code is redundant because it is generated from basis functions chosen without regard to the characteristics of the iris texture, so its size is unnecessarily large. In this paper we propose a new feature extraction algorithm based on independent component analysis (ICA) that yields a compact iris code. We apply ICA to generate optimal basis functions that represent iris signals efficiently; the coefficients of the ICA expansion are used as feature vectors, which are then encoded into an iris code for storing and comparing individuals' iris patterns. We also introduce a method for refining the ICA basis functions to improve recognition performance. Experimental results show that the proposed method achieves an equal error rate similar to that of the conventional Gabor-wavelet method, with an iris code one fifth the size.
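
    The following is a minimal sketch of the encoding idea described above: learn ICA basis functions from iris signals, use the expansion coefficients as a feature vector, and binarize them into a compact code compared by Hamming distance. It is not the authors' implementation; the 1D signal layout, the use of scikit-learn's FastICA, and the sign-based binarization are all assumptions.

```python
# Minimal sketch of ICA-based iris feature encoding (assumptions noted above).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Hypothetical training data: each row stands in for a normalized 1D iris
# signal (e.g., intensity samples along an angular band of the unwrapped iris).
X_train = rng.standard_normal((200, 64))

# Learn statistically independent basis functions from the iris signals.
# FastICA here stands in for the authors' ICA training procedure.
ica = FastICA(n_components=16, max_iter=500, random_state=0)
ica.fit(X_train)

def iris_code(signal: np.ndarray) -> np.ndarray:
    """Encode one iris signal as a compact binary code.

    The ICA expansion coefficients serve as the feature vector; binarizing
    by coefficient sign (an assumption, mirroring Gabor-phase quantization)
    yields a code far smaller than the 256-byte Gabor iris code.
    """
    coeffs = ica.transform(signal.reshape(1, -1))[0]
    return (coeffs > 0).astype(np.uint8)

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits; small distances indicate a match."""
    return float(np.mean(code_a != code_b))

probe = rng.standard_normal(64)
print(iris_code(probe), hamming_distance(iris_code(probe), iris_code(probe)))
```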

  • Gaze Detection by Estimating the Depths and 3D Motion of Facial Features in Monocular Images

    Kang Ryoung PARK  Si Wook NAM  Min Suk LEE  Jaihie KIM  

    This paper was deleted on March 10, 2006 because it was found to be a duplicate submission (see details in the pdf file).
     
    PAPER-Human Communications and Ergonomics

    Vol: E82-A  No: 10  Page(s): 2274-2284

    This paper describes a new method for detecting the position on a monitor at which a user is gazing, using monocular images. We automatically extract facial features (both eyes, the nostrils, and the lip corners) in 2D camera images and estimate their 3D depths and initial 3D positions with a recursive estimation algorithm applied to the starting images. When the user then moves his or her head to gaze at a position on the monitor, the moved 3D positions of the features are estimated by 3D motion estimation using an Extended Kalman Filter (EKF) and an affine transform. Finally, the gaze position on the monitor is calculated from the normal vector of the plane determined by the moved 3D feature positions. In particular, to obtain exact 3D depths and positions of the initial feature points, we unify the three coordinate systems (face, monitor, and camera) through a perspective transformation. Experimentally, the RMS error between the estimated initial 3D feature positions and the real positions (measured with a 3D position tracker sensor) is about 1.28 cm (0.75 cm along the X axis, 0.85 cm along the Y axis, and 0.6 cm along the Z axis), and the EKF's 3D motion estimation errors are about 3.6 degrees in rotation and 1.4 cm in translation. The resulting gaze position accuracy on a 17-inch monitor, comparing calculated and real positions, is about 2.1 inches RMS.
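
    As a minimal sketch of the final step described above, the snippet below computes the gaze point by intersecting the facial-plane normal with the monitor plane. The unified coordinate conventions (monitor plane at z = 0) and the example feature positions are assumptions; the EKF motion estimation itself is omitted.

```python
# Sketch: gaze point from the normal of the plane through three facial features.
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three tracked facial features (3D)."""
    n = np.cross(np.asarray(p2) - np.asarray(p1), np.asarray(p3) - np.asarray(p1))
    return n / np.linalg.norm(n)

def gaze_on_monitor(face_center, normal):
    """Intersect the ray from the face center along the facial normal with the
    monitor plane, assumed here to be z = 0 in the unified coordinate system."""
    fx, fy, fz = face_center
    nx, ny, nz = normal
    if abs(nz) < 1e-9:
        raise ValueError("facial normal is parallel to the monitor plane")
    t = -fz / nz                        # ray parameter where z becomes 0
    return fx + t * nx, fy + t * ny     # (x, y) gaze point on the screen

# Hypothetical moved 3D feature positions (cm) after EKF motion estimation:
eyes_mid = (0.0, 5.0, 50.0)
nostrils_mid = (0.0, 2.0, 48.0)
lip_corner = (2.0, 1.0, 48.5)
# Three non-collinear features define the facial plane.
n = plane_normal(eyes_mid, nostrils_mid, lip_corner)
print(gaze_on_monitor(eyes_mid, n))
```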

  • Gaze Point Detection by Computing the 3D Positions and 3D Motion of Face

    Kang Ryoung PARK  Jaihie KIM  

    This paper was deleted on March 10, 2006 because it was found to be a duplicate submission (see details in the pdf file).
     
    PAPER-Image Processing, Image Pattern Recognition

    Vol: E83-D  No: 4  Page(s): 884-894

    Gaze detection locates the position on a monitor screen at which a user is looking. We implement it as a computer vision system with a single camera mounted above the monitor; the user moves (rotates and/or translates) her face to gaze at different positions on the screen. In the present system the user is asked not to move her pupils while gazing at a position, although we are working to relax this restriction. To detect the gaze position, we automatically extract facial features (both eyes, the nostrils, and the lip corners) in 2D camera images. From the movement of the feature points detected in the starting images, we compute their initial 3D positions with a recursive estimation algorithm. When the user then moves her head to gaze at a position on the monitor, the moved 3D positions of the features are computed by 3D motion estimation using an Iterative Extended Kalman Filter (IEKF) and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D feature positions. In particular, to obtain exact 3D positions of the initial feature points, we unify the three coordinate systems (face, monitor, and camera) through a perspective transformation. Experimentally, the RMS error between the estimated initial 3D feature positions and the real positions (measured with a 3D position tracker sensor) is about 1.28 cm (0.75 cm along the X axis, 0.85 cm along the Y axis, and 0.6 cm along the Z axis), and the IEKF's 3D motion estimation errors are about 2.8 degrees in rotation and 1.21 cm in translation. The resulting gaze position accuracy on a 17-inch monitor, comparing calculated and real positions, is about 2.06 inches RMS.
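
    The measurement update that distinguishes the Iterative Extended Kalman Filter from the plain EKF can be sketched generically, as below. The paper's state and measurement models (3D feature motion observed through perspective projection) are replaced here by a toy range-bearing example, so everything beyond the update equations is an assumption.

```python
# Generic iterated-EKF measurement update: re-linearize h around each iterate.
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, n_iter=5):
    """Iterated EKF measurement update.

    x, P  : prior state estimate and covariance
    z     : measurement vector
    h     : measurement function h(x)
    H_jac : Jacobian of h evaluated at a state
    R     : measurement noise covariance
    Re-linearizing h around each new iterate refines the update, which is
    the defining difference from the single-step EKF update.
    """
    xi = x.copy()
    for _ in range(n_iter):
        H = H_jac(xi)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        xi = x + K @ (z - h(xi) - H @ (x - xi))
    P_new = (np.eye(len(x)) - K @ H) @ P
    return xi, P_new

# Toy example: estimate a 2D point from a nonlinear range-bearing measurement.
h = lambda x: np.array([np.hypot(*x), np.arctan2(x[1], x[0])])
H_jac = lambda x: np.array([
    [x[0] / np.hypot(*x), x[1] / np.hypot(*x)],
    [-x[1] / (x @ x),     x[0] / (x @ x)],
])
x0 = np.array([1.0, 1.0])
P0 = np.eye(2) * 0.5
z = np.array([1.5, 0.9])
print(iekf_update(x0, P0, z, h, H_jac, np.eye(2) * 0.01))
```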

  • A Temporal Data Maintenance Method in an ATMS

    MinSuk LEE  YeungGyu PARK  ChoongShik PARK  Jaihie KIM  

     
    LETTER-Artificial Intelligence, Cognitive Science

    Vol: E83-D  No: 2  Page(s): 295-298

    An ATMS (Assumption-based Truth Maintenance System) is widely used to maintain the truth of information by detecting and resolving contradictions in rule-based systems. However, a conventional ATMS cannot correctly maintain the truth of information that holds only within a time interval, or that involves temporal relations between events in time-varying situations, because it has no mechanism for manipulating temporal data. In this paper, we propose an extended ATMS that can maintain the truth of information in knowledge-based systems that use information changing over time or temporal relations between events. To maintain the contexts generated by relations between events, we modify the label representation, the disjunction and conjunction simplification in the label-propagation procedure, and the nogood handling of the conventional ATMS.
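
    A schematic sketch of the central idea, attaching validity intervals to ATMS environments so that label conjunction also intersects intervals, is given below. The data structures are illustrative assumptions; the paper's actual label representation and nogood handling are not reproduced.

```python
# Sketch: ATMS environments extended with a validity interval.
from dataclasses import dataclass

@dataclass(frozen=True)
class Env:
    """An ATMS environment: a set of assumptions plus the time interval
    over which the supported datum holds."""
    assumptions: frozenset
    start: float
    end: float

def conjoin(a: Env, b: Env):
    """Label conjunction: union the assumptions and intersect the intervals.
    An empty intersection means the combined environment never holds, so it
    is dropped (a temporal analogue of nogood elimination)."""
    start, end = max(a.start, b.start), min(a.end, b.end)
    if start >= end:
        return None
    return Env(a.assumptions | b.assumptions, start, end)

e1 = Env(frozenset({"A"}), 0.0, 10.0)   # datum holds during [0, 10)
e2 = Env(frozenset({"B"}), 5.0, 15.0)   # datum holds during [5, 15)
print(conjoin(e1, e2))                                  # overlap: [5, 10)
print(conjoin(e1, Env(frozenset({"C"}), 12.0, 20.0)))   # None: disjoint in time
```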

  • Real-Time Facial and Eye Gaze Tracking System

    Kang Ryoung PARK  Jaihie KIM  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E88-D  No: 6  Page(s): 1231-1238

    The goal of gaze detection is to locate the position on a monitor at which a user is looking. Previous approaches use a single wide-view camera that captures the user's entire face, but the resulting image resolution is too low to detect the fine movements of the user's eyes accurately. We therefore propose a new gaze detection system with dual cameras (a wide-view and a narrow-view camera). To locate the user's eye position accurately, the narrow-view camera auto-focuses, pans, and tilts based on the 3D eye positions detected by the wide-view camera. In addition, we use IR-LED illuminators for both cameras, which ease the detection of facial features and of pupil and iris positions. To overcome specular reflections on glasses caused by an illuminator, we use dual IR-LED illuminators for each camera and detect the accurate eye position that is not hidden by the reflection. Experimental results show that the gaze detection error between the computed positions and the real ones is about 2.89 cm RMS.
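
    A minimal sketch of the steering step described above, pointing the narrow-view camera at the 3D eye position reported by the wide-view camera, might look as follows. The camera geometry (narrow camera at the origin of the unified frame) and the angle conventions are assumptions.

```python
# Sketch: pan/tilt/focus of the narrow-view camera from a 3D eye position.
import math

def pan_tilt_to_eye(eye_xyz):
    """Pan/tilt angles (degrees) that point the narrow-view camera at the eye.

    eye_xyz: 3D eye position (cm) in the narrow camera's coordinate frame,
    with +z pointing away from the camera toward the user.
    """
    x, y, z = eye_xyz
    pan = math.degrees(math.atan2(x, z))                   # left/right rotation
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))   # up/down rotation
    focus = math.sqrt(x * x + y * y + z * z)               # distance for auto-focus
    return pan, tilt, focus

# Hypothetical eye position: 50 cm away, 5 cm right, 3 cm above the optical axis.
print(pan_tilt_to_eye((5.0, 3.0, 50.0)))
```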