Author Search Result

[Author] Akio NAKAMURA (4 hits)

  • Interactive Object Recognition through Hypothesis Generation and Confirmation

    Md. Altab HOSSAIN  Rahmadi KURNIA  Akio NAKAMURA  Yoshinori KUNO  

     
    PAPER-Interactive Systems

    Vol: E89-D No:7
    Page(s): 2197-2206

    Effective human-robot interaction is essential for service robots to penetrate the market widely. Such a robot needs a vision system to recognize objects. It is, however, difficult to realize vision systems that can work under various conditions, so more robust techniques for object recognition and image segmentation are needed. We have therefore proposed using the human user's assistance, given through speech, for object recognition. This paper presents a system that recognizes objects in occluded and/or multicolor cases using geometric and photometric analysis of images. Based on the analysis results, the system forms a hypothesis about the scene. It then asks the user for confirmation by describing the hypothesis. If the hypothesis is not correct, the system generates another hypothesis, repeating until it correctly understands the scene. Through experiments on a real mobile robot, we have confirmed the usefulness of the system.
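
    A minimal, self-contained sketch of the hypothesis-generate-and-confirm loop described in this abstract follows. The hypothesis representation (alternative groupings of color segments) and the console-based yes/no dialogue are illustrative assumptions; the actual system works on real images and interacts through speech.

    ```python
    # Sketch of interactive recognition by hypothesis generation and confirmation.
    # All data structures here are toy stand-ins, not the authors' implementation.

    def generate_hypotheses(segments):
        """Toy stand-in for geometric/photometric analysis: each hypothesis groups
        the observed color segments into objects in a different way."""
        return [
            {"objects": [segments]},                 # all segments form one multicolor object
            {"objects": [[s] for s in segments]},    # each segment is a separate object
        ]

    def describe(hypothesis):
        """Render a hypothesis as the sentence the robot would speak to the user."""
        n = len(hypothesis["objects"])
        return f"I think the scene contains {n} object(s). Is that correct? (yes/no) "

    def recognize_interactively(segments):
        """Propose hypotheses one by one until the user confirms one."""
        for hypothesis in generate_hypotheses(segments):
            if input(describe(hypothesis)).strip().lower() == "yes":
                return hypothesis        # the user confirmed this interpretation
        return None                      # no hypothesis was accepted

    if __name__ == "__main__":
        # Two color segments that might be one multicolor object or two objects.
        print(recognize_interactively(["red region", "blue region"]))
    ```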

  • Interactive Object Recognition System for a Helper Robot Using Photometric Invariance

    Md. Altab HOSSAIN  Rahmadi KURNIA  Akio NAKAMURA  Yoshinori KUNO  

     
    PAPER

    Vol: E88-D No:11
    Page(s): 2500-2508

    We are developing a helper robot that carries out tasks ordered by the user through speech. The robot needs a vision system to recognize the objects referred to in those orders. It is, however, difficult to realize vision systems that can work under various conditions. Thus, we have proposed to use the human user's assistance through speech. When the vision system cannot achieve a task, the robot speaks to the user so that the user's natural response can give helpful information to its vision system. Our previous system assumed that it could segment images without failure. However, if there are occluded objects and/or objects composed of multicolor parts, segmentation failures cannot be avoided. This paper presents an extended system that tries to recover from segmentation failures using photometric invariance. If the system is not sure about its segmentation results, it asks the user with expressions chosen according to the invariant values. Experimental results show the usefulness of the system.
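
    The sketch below illustrates, under stated assumptions, how a photometric invariant can flag a possible over-segmentation: normalized rg chromaticity is invariant to intensity changes such as shading, so two adjacent regions with matching chromaticity but different brightness may be one surface split by a segmentation failure. The region representation (mean RGB) and the tolerance threshold are illustrative choices, not the paper's exact criterion.

    ```python
    # Using an intensity-invariant color description to detect a likely
    # segmentation failure caused by shading. Illustrative sketch only.

    def chromaticity(rgb):
        """Map an (R, G, B) triple to intensity-invariant normalized rg values."""
        r, g, b = rgb
        s = r + g + b or 1                 # avoid division by zero for black pixels
        return (r / s, g / s)

    def likely_same_surface(mean_rgb_a, mean_rgb_b, tol=0.05):
        """Two regions with near-identical chromaticity but different brightness
        may be one surface split by shading, i.e. a segmentation failure."""
        ca, cb = chromaticity(mean_rgb_a), chromaticity(mean_rgb_b)
        return abs(ca[0] - cb[0]) < tol and abs(ca[1] - cb[1]) < tol

    if __name__ == "__main__":
        shaded = (60, 30, 15)              # dark part of a brown object
        lit = (180, 90, 45)                # brightly lit part of the same object
        print(likely_same_surface(shaded, lit))   # True: same chromaticity
    ```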

  • Analyzing Fine Motion Considering Individual Habit for Appearance-Based Proficiency Evaluation

    Yudai MIYASHITA  Hirokatsu KATAOKA  Akio NAKAMURA  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2016/10/18
    Vol: E100-D No:1
    Page(s): 166-174

    We propose an appearance-based proficiency evaluation methodology based on fine-motion analysis. We take the effects of individual habit into account when evaluating proficiency and analyze the fine motion of guitar picking. We first extract multiple features along a large number of dense trajectories of the fine motion. To facilitate analysis, we then generate a histogram of motion features using a bag-of-words model, varying the number of visual words as appropriate. To remove the effects of individual habit, we extract the principal histogram elements common to experts or to beginners, selected according to their contribution rates to discrimination as computed with random forests. We finally calculate the similarity of the histograms to evaluate the proficiency of a guitar-picking motion. By optimizing the number of visual words for proficiency evaluation, we demonstrate that our method distinguishes experts from beginners with an accuracy of about 86%. Moreover, we verify experimentally that the proposed methodology can evaluate proficiency while removing the effects of individual habit.
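
    The following sketch shows the bag-of-words step and the histogram comparison in their simplest form: local motion features are quantized against a visual-word codebook, accumulated into a normalized histogram, and compared with an expert's histogram. The random codebook, the 16-dimensional features, and the histogram-intersection similarity are assumptions made for illustration; the paper uses dense-trajectory features and random-forest-based selection of histogram elements.

    ```python
    # Bag-of-words histogram of motion features and histogram similarity.
    # Illustrative sketch with synthetic data, not the paper's pipeline.
    import numpy as np

    def bow_histogram(features, codebook):
        """Assign each feature vector to its nearest visual word and count them."""
        dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        words = dists.argmin(axis=1)                      # index of nearest codeword
        hist = np.bincount(words, minlength=len(codebook)).astype(float)
        return hist / hist.sum()                          # normalize to a distribution

    def similarity(h1, h2):
        """Histogram intersection: 1.0 means identical distributions."""
        return np.minimum(h1, h2).sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        codebook = rng.normal(size=(32, 16))              # 32 visual words, 16-D features
        expert = bow_histogram(rng.normal(size=(500, 16)), codebook)
        learner = bow_histogram(rng.normal(size=(500, 16)), codebook)
        print(f"proficiency score: {similarity(expert, learner):.2f}")
    ```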

  • Bidirectional Eye Contact for Human-Robot Communication

    Dai MIYAUCHI  Akio NAKAMURA  Yoshinori KUNO  

     
    PAPER

    Vol: E88-D No:11
    Page(s): 2509-2516

    Eye contact is an effective means of controlling human communication, for example in starting communication. It may seem that we can make eye contact simply by looking at each other, but this alone does not establish eye contact: both parties also need to be aware of being watched by the other. We propose a method of bidirectional eye contact that satisfies these conditions for human-robot communication. When a human wants to start communication with a robot, he/she watches the robot. If the robot finds a human looking at it, it turns toward him/her, changing its facial expression to let him/her know that it is aware of his/her gaze. Conversely, when the robot wants to initiate communication with a particular person, it moves its body and face toward him/her and changes its facial expression to make the person notice its gaze. We show several experimental results demonstrating the effectiveness of this method. Moreover, we present a robot that can recognize hand gestures after making eye contact with the human, showing the usefulness of eye contact as a means of controlling communication.
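
    Below is a minimal sketch of the robot-side eye-contact procedure as a simple polling loop: wait for the human's gaze, then orient toward the person and change the facial expression to signal awareness. The perception and actuation callbacks (human_is_looking, turn_toward, show_awareness) are hypothetical placeholders for the robot's vision and motor modules, not the authors' interfaces.

    ```python
    # Establishing bidirectional eye contact: detect the human's gaze, face the
    # person, and signal awareness of being watched. Illustrative sketch only.
    import time

    def make_eye_contact(human_is_looking, turn_toward, show_awareness, timeout=5.0):
        """Wait until the human looks at the robot, then face him/her and signal
        awareness of the gaze; eye contact is established once both conditions hold."""
        start = time.time()
        while time.time() - start < timeout:
            if human_is_looking():         # vision module: is the human's gaze on me?
                turn_toward()              # actuator: orient body and face to the human
                show_awareness()           # change facial expression to signal awareness
                return True                # bidirectional eye contact established
            time.sleep(0.1)
        return False                       # no gaze detected within the timeout

    if __name__ == "__main__":
        # Toy stand-ins for the robot's perception and actuation modules.
        looks = iter([False, False, True])
        print(make_eye_contact(lambda: next(looks, True),
                               lambda: print("turning toward the person"),
                               lambda: print("raising eyebrows to signal awareness")))
    ```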