
Author Search Results

[Author] Masashi NISHIYAMA (3 hits)

  • Embedding the Awareness State and Response State in an Image-Based Avatar to Start Natural User Interaction

    Tsubasa MIYAUCHI  Ayato ONO  Hiroki YOSHIMURA  Masashi NISHIYAMA  Yoshio IWAI  

     
    LETTER-Human-computer Interaction

    Publicized: 2017/09/08
    Vol: E100-D No:12
    Page(s): 3045-3049

    We propose a method for embedding an awareness state and a response state in an image-based avatar so that it can smoothly and automatically start an interaction with a user. When these states are not embedded, the image-based avatar can appear non-responsive or slow to respond. To study how interactions begin, we observed the behaviors exchanged between users and a receptionist at an information center. Our method replays the receptionist's behaviors at appropriate times in each state of the image-based avatar. Experimental results demonstrate that, at the beginning of the interaction, embedding the awareness and response states yielded higher subjective scores than not embedding them.
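
    A minimal sketch of the idea, assuming a simple state machine in which hypothetical detector signals (user_detected, user_speaking) trigger playback of pre-recorded receptionist behavior clips; the state names, signals, and clip names are illustrative and not the authors' implementation:

        from enum import Enum, auto

        class AvatarState(Enum):
            IDLE = auto()       # no user nearby
            AWARENESS = auto()  # a user was detected; avatar shows it is aware
            RESPONSE = auto()   # the user addressed the avatar; avatar starts responding
            DIALOGUE = auto()   # main interaction has begun

        def update_state(state, user_detected, user_speaking):
            """Advance the avatar state and return (new_state, behavior clip to replay)."""
            if state == AvatarState.IDLE and user_detected:
                return AvatarState.AWARENESS, "clip_look_at_user"   # hypothetical clip name
            if state == AvatarState.AWARENESS and user_speaking:
                return AvatarState.RESPONSE, "clip_nod_and_greet"   # hypothetical clip name
            if state == AvatarState.RESPONSE:
                return AvatarState.DIALOGUE, None
            return state, None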

  • Temporal and Spatial Analysis of Local Body Sway Movements for the Identification of People

    Takuya KAMITANI  Hiroki YOSHIMURA  Masashi NISHIYAMA  Yoshio IWAI  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2018/10/09
    Vol: E102-D No:1
    Page(s): 165-174

    We propose a method for accurately identifying people using temporal and spatial changes in local movements measured from video sequences of body sway. Existing methods identify people using gait features, which mainly represent the large swinging of the limbs; their identification performance therefore decreases when people stop walking and stand upright. To extract informative features in this case, our method measures the small swings of the body, referred to as body sway. We divide the body into regions and extract the power spectral density of the local body sway movements in each region as a feature. To evaluate identification performance, we collected three original video datasets of body sway sequences: the first contains a large number of participants in an upright posture, the second includes variation over a long period, and the third contains body sway in different postures. The results on these datasets confirm that local movements measured from body sway provide informative features for identification.
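
    As a rough illustration of the feature-extraction step, the sketch below computes a power-spectral-density feature vector per body region with Welch's method and matches it by nearest neighbour; the region tracks, frame rate, and matcher are assumptions for illustration, not the authors' exact pipeline:

        import numpy as np
        from scipy.signal import welch

        def body_sway_features(region_tracks, fs=30.0):
            """PSD feature vector from local body sway movements.

            region_tracks: dict mapping a body-region name to an array of shape (T, 2)
                           holding that region's horizontal/vertical displacement over
                           T video frames (region definition and tracking are assumed).
            fs:            video frame rate in Hz.
            """
            features = []
            for track in region_tracks.values():
                for axis in range(track.shape[1]):
                    # Welch PSD of the small sway oscillations along one axis
                    _, psd = welch(track[:, axis], fs=fs, nperseg=min(128, len(track)))
                    features.append(psd)
            return np.concatenate(features)

        def identify(query, gallery):
            """Return the enrolled person whose feature vector is closest to the query."""
            dists = {pid: np.linalg.norm(query - feat) for pid, feat in gallery.items()}
            return min(dists, key=dists.get)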

  • Gender Recognition Using a Gaze-Guided Self-Attention Mechanism Robust Against Background Bias in Training Samples

    Masashi NISHIYAMA  Michiko INOUE  Yoshio IWAI  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2021/11/18
    Vol: E105-D No:2
    Page(s): 415-426

    We propose an attention mechanism for gender recognition in deep learning networks that uses the gaze distribution of human observers judging the gender of people in pedestrian images. Prevalent attention mechanisms spatially compute the correlation among the values of all cells in an input feature map to calculate attention weights. If the background of the pedestrian images is strongly biased (e.g., the training and test samples contain different backgrounds), the attention weights learned with these mechanisms are affected by the bias, which in turn reduces the accuracy of gender recognition. To avoid this problem, we incorporate an attention mechanism called gaze-guided self-attention (GSA), inspired by human visual attention, which assigns spatially suitable attention weights to each input feature map using the gaze distribution of human observers. In particular, GSA yields promising results even when the training samples contain background bias. Experiments on publicly available datasets confirm that GSA, using the gaze distribution, recognizes gender more accurately than currently available attention-based methods when there is background bias between the training and test samples.
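
    A minimal sketch of the weighting idea, assuming the observers' gaze distribution has already been collected and resized to the feature-map resolution; the simple channel-shared weighting below is illustrative and not the paper's exact GSA formulation:

        import numpy as np

        def gaze_guided_attention(feature_map, gaze_map):
            """Re-weight a CNN feature map with a human gaze distribution.

            feature_map: array of shape (C, H, W), activations of one network layer.
            gaze_map:    array of shape (H, W), average gaze density of the observers,
                         resized to the feature-map resolution (collection and resizing
                         are assumed to happen elsewhere).
            """
            # Normalise the gaze map into spatial attention weights that sum to 1
            weights = gaze_map / (gaze_map.sum() + 1e-8)
            # Apply the same spatial weights to every channel of the feature map
            return feature_map * weights[None, :, :]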