
Author Search Result

[Author] Koji MURAKOSO (1 hit)

1-1 hit
  • Human Pose Annotation Using a Motion Capture System for Loose-Fitting Clothes

    Takuya MATSUMOTO, Kodai SHIMOSATO, Takahiro MAEDA, Tatsuya MURAKAMI, Koji MURAKOSO, Kazuhiko MINO, Norimichi UKITA

     
    PAPER

      Publicized: 2020/03/30
      Vol: E103-D No:6
      Page(s): 1257-1264

    This paper proposes a framework for automatically annotating the keypoints of a human body in images for learning 2D pose estimation models. Producing ground-truth annotations for supervised learning is difficult and cumbersome in most machine vision tasks. While considerable contributions from the community provide a huge number of pose-annotated images, they focus mainly on people wearing common clothes, whose body keypoints are relatively easy to annotate. This paper, on the other hand, focuses on annotating people wearing loose-fitting clothes (e.g., Japanese Kimono) that occlude many body keypoints. To annotate such people automatically and correctly, we reuse the 3D coordinates of the keypoints observed without loose-fitting clothes, which can be captured by a motion capture system (MoCap). These 3D keypoints are projected onto an image in which the body pose under loose-fitting clothes is similar to the one captured by the MoCap. Pose similarity between bodies with and without loose-fitting clothes is evaluated with the 3D geometric configurations of MoCap markers that remain visible even with loose-fitting clothes (e.g., markers on the head, wrists, and ankles). Experimental results validate the effectiveness of our proposed framework for human pose estimation.
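
    The sketch below is a minimal illustration, not the authors' implementation, of the two steps the abstract describes: scoring pose similarity from the 3D configuration of markers that remain visible under loose clothes, and projecting the full MoCap keypoint set of the best-matching bare-body frame into the image with a standard pinhole camera model. The marker indices, similarity metric, camera parameters, and function names are all hypothetical.

    ```python
    import numpy as np

    # Hypothetical indices of markers that stay visible under loose-fitting
    # clothes (e.g., head, wrists, ankles), as mentioned in the abstract.
    VISIBLE = [0, 15, 16, 19, 20]

    def pose_similarity(markers_a: np.ndarray, markers_b: np.ndarray) -> float:
        """Compare two poses using only the visible markers.

        markers_*: (M, 3) arrays of 3D MoCap marker positions.
        Returns the mean discrepancy between pairwise-distance matrices of the
        visible markers (smaller = more similar); an assumed, illustrative metric.
        """
        a, b = markers_a[VISIBLE], markers_b[VISIBLE]
        # Pairwise distances are invariant to global rotation and translation.
        da = np.linalg.norm(a[:, None] - a[None, :], axis=-1)
        db = np.linalg.norm(b[:, None] - b[None, :], axis=-1)
        return float(np.abs(da - db).mean())

    def project_keypoints(keypoints_3d: np.ndarray, K: np.ndarray,
                          R: np.ndarray, t: np.ndarray) -> np.ndarray:
        """Project (N, 3) body keypoints into pixel coordinates (pinhole model)."""
        cam = R @ keypoints_3d.T + t[:, None]   # 3D points in camera coordinates
        uv = K @ cam                            # homogeneous pixel coordinates
        return (uv[:2] / uv[2]).T               # (N, 2) pixel coordinates

    def annotate(image_markers, mocap_db, K, R, t):
        """Given visible markers measured for an image of a person in loose
        clothes, find the MoCap frame (captured without loose clothes) whose
        visible-marker configuration is most similar, and project its full
        keypoint set into the image as the automatic annotation."""
        best = min(mocap_db,
                   key=lambda f: pose_similarity(image_markers, f["markers"]))
        return project_keypoints(best["keypoints_3d"], K, R, t)
    ```
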