Author Search Result

[Author] Nobutaka SHIMADA (4 hits)

  • Recognition of Shape-Changing Hand Gestures

    Mun-Ho JEONG  Yoshinori KUNO  Nobutaka SHIMADA  Yoshiaki SHIRAI  

     
    PAPER-Multimedia Pattern Processing

      Vol:
    E85-D No:10
      Page(s):
    1678-1687

    We present a method to track and recognize shape-changing hand gestures simultaneously. A switching linear model combined with an active contour model captures the temporal shapes and motions of hands well. However, inference in the switching linear model is computationally intractable, so learning cannot be performed with the exact EM (Expectation Maximization) algorithm. We therefore present an approximate EM algorithm based on a collapsing method, in which several Gaussians are merged into a single Gaussian. Tracking is performed with a forward algorithm based on Kalman filtering and the collapsing method. We also present a regularized smoothing that reduces jump changes between the training sequences of shape vectors representing complex, varying hand shapes. Recognition is performed by selecting the model with the maximum likelihood among several trained models while tracking proceeds. Experiments on several shape-changing hand gestures demonstrate the method.
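    The collapsing step mentioned in the abstract can be illustrated with a small sketch: merging a mixture of Gaussians into a single Gaussian by matching its first two moments. This is a generic moment-matching collapse in one dimension, with illustrative values, not the paper's actual multivariate implementation.

    ```python
    # Minimal sketch, assuming a 1-D Gaussian mixture: "collapse" the
    # components (w_i, mu_i, var_i) into one Gaussian by moment matching,
    # as done in approximate EM for switching linear models.

    def collapse(weights, means, variances):
        """Return (mean, variance) of the single Gaussian whose first two
        moments match those of the mixture."""
        total = sum(weights)
        ws = [w / total for w in weights]                  # normalize weights
        mu = sum(w * m for w, m in zip(ws, means))         # mixture mean
        # mixture variance = E[component variance] + variance of the means
        var = sum(w * (v + (m - mu) ** 2)
                  for w, m, v in zip(ws, means, variances))
        return mu, var

    # Two symmetric components collapse to mean 0, variance 0.25 + 1.0 = 1.25
    mu, var = collapse([0.5, 0.5], [-1.0, 1.0], [0.25, 0.25])
    ```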

  • Robust Face Recognition under Various Illumination Conditions

    Atsushi MATSUMOTO  Yoshiaki SHIRAI  Nobutaka SHIMADA  Takuro SAKIYAMA  Jun MIURA  

     
    PAPER-Face, Gesture, and Action Recognition

      Vol:
    E89-D No:7
      Page(s):
    2157-2163

    We propose a method of face identification under various illumination conditions. Because we use an image-based method for identification, an accurate face position is required. First, facial features are detected, and the face region is determined from them. Then, by registering the face region to the average face, the horizontal position of the face is adjusted. Finally, the size of the face region is adjusted based on the distance between the two eyes, determined from all input frames. If the images of all faces are normalized to a single size, the face-length feature is lost in the normalized face image. We therefore classify each face into one of three categories according to face length and generate a subspace for each category, so that the face-length feature is preserved. We demonstrate the effectiveness of the proposed method by experiments.
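    The normalization and categorization steps above can be sketched as follows. The reference eye distance and the face-length thresholds here are illustrative assumptions, not values from the paper.

    ```python
    # Hedged sketch: scale a face region so its inter-eye distance matches a
    # reference, then bucket the face by a length ratio so the face-length
    # cue survives normalization. Constants are made up for illustration.

    def scale_factor(eye_left, eye_right, ref_eye_dist=60.0):
        """Scale factor mapping the measured eye distance to the reference."""
        dx = eye_right[0] - eye_left[0]
        dy = eye_right[1] - eye_left[1]
        dist = (dx * dx + dy * dy) ** 0.5
        return ref_eye_dist / dist

    def face_length_category(face_height, eye_dist, short=1.4, long=1.7):
        """Classify a face by its height-to-eye-distance ratio (toy thresholds)."""
        ratio = face_height / eye_dist
        if ratio < short:
            return "short"
        if ratio < long:
            return "medium"
        return "long"

    s = scale_factor((0.0, 0.0), (30.0, 40.0))   # eye distance 50 -> factor 1.2
    cat = face_length_category(90.0, 60.0)       # ratio 1.5 -> "medium"
    ```

    A separate subspace (e.g. via PCA) would then be built per category, so that faces are only compared against faces of similar length.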

  • Construction of Latent Descriptor Space and Inference Model of Hand-Object Interactions

    Tadashi MATSUO  Nobutaka SHIMADA  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2017/03/13
      Vol:
    E100-D No:6
      Page(s):
    1350-1359

    Appearance-based generic object recognition is a challenging problem because all possible appearances of objects cannot be registered, especially as new objects are produced every day. The functions of objects, however, have comparatively few prototypes; therefore, function-based classification of new objects could be a valuable tool for generic object recognition. Object functions are closely related to hand-object interactions during handling of a functional object: how the hand approaches the object, which parts of the object contact the hand, and the shape of the hand during the interaction. Hand-object interactions are thus helpful for modeling object functions. However, it is difficult to assign discrete labels to interactions because object shapes and grasping hand postures intrinsically vary continuously. To describe these interactions, we propose an interaction descriptor space acquired from unlabeled appearances of human hand-object interactions. Using interaction descriptors, we can numerically describe the relation between an object's appearance and its possible interactions with the hand. The model infers the quantitative state of the interaction from the object image alone, and it identifies the parts of objects designed for hand interaction, such as grips and handles. We demonstrate that the proposed method generates, without supervision, interaction descriptors that form clusters corresponding to interaction types, and that the model can infer possible hand-object interactions.
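    The claim that descriptors form clusters corresponding to interaction types can be illustrated with a toy nearest-centroid assignment in descriptor space. The two-dimensional descriptors and centroids below are made-up stand-ins, not the paper's learned latent space.

    ```python
    # Hedged sketch: assign latent interaction descriptors to the nearest
    # cluster centroid (Euclidean distance), illustrating how a descriptor
    # space could group interaction types. All vectors are toy values.

    def nearest_cluster(descriptor, centroids):
        """Index of the centroid closest to the descriptor."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(range(len(centroids)),
                   key=lambda i: dist2(descriptor, centroids[i]))

    grip = [0.9, 0.1]    # hypothetical descriptor of a grip-like interaction
    pinch = [0.1, 0.8]   # hypothetical descriptor of a pinch-like interaction
    centroids = [[1.0, 0.0], [0.0, 1.0]]
    # grip falls in cluster 0, pinch in cluster 1
    ```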

  • Recognition of Two-Hand Gestures Using Coupled Switching Linear Model

    Mun-Ho JEONG  Yoshinori KUNO  Nobutaka SHIMADA  Yoshiaki SHIRAI  

     
    PAPER-Image Processing, Image Pattern Recognition

      Vol:
    E86-D No:8
      Page(s):
    1416-1425

    We present a method for recognizing two-hand gestures. Two-hand gestures involve fine-grained hand shapes against a complicated background and exhibit complex dynamic behavior. Assuming that a two-hand gesture is an interacting process of two hands whose shapes and motions are described by switching linear dynamics, we propose a coupled switching linear dynamic model that captures the interactions between the hands. The parameters of the model are learned via the EM algorithm using approximate computations. Recognition is performed by selecting the model with the maximum likelihood from a few learned models during tracking. Experiments confirm the effectiveness of the proposed model for tracking and recognizing two-hand gestures.
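    The coupling idea can be sketched with a single update step of a coupled linear dynamic model in which each hand's next state depends on both hands' current states. The scalar states and coupling coefficients are illustrative, not learned parameters from the paper.

    ```python
    # Minimal sketch, assuming 1-D states per hand: each hand keeps most of
    # its own dynamics (coefficient a) plus a small coupling contribution
    # (coefficient c) from the other hand. Values are toy assumptions.

    def coupled_step(x_left, x_right, a=0.9, c=0.1):
        """One deterministic step of a coupled linear dynamic model."""
        next_left = a * x_left + c * x_right
        next_right = a * x_right + c * x_left
        return next_left, next_right

    # Starting from (1.0, 0.0): left decays to 0.9, right picks up 0.1
    l, r = coupled_step(1.0, 0.0)
    ```

    In the full model, each hand would also carry a discrete switching state, with the coupling applied across both the discrete and continuous dynamics.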