
Author Search Result

[Author] Yoshinori KUSACHI (2 hits)
  • Stabilization Technique for Region-of-Interest Trajectories Made from Video Watching Manipulations

    Daisuke OCHI  Hideaki KIMATA  Yoshinori KUSACHI  Kosuke TAKAHASHI  Akira KOJIMA  

    PAPER-Human-computer Interaction

    Vol: E97-D No:2
    Page(s): 266-274

    Thanks to recent progress in cameras and network environments, online video services let people around the world watch and share high-quality HD videos that cover a wide angle without losing object detail. As a result, users watch the same video in different ways with different ROIs (Regions of Interest), especially when a scene contains multiple objects, so there is no direct way for them to convey their impressions of each scene. Posting messages is the usual workaround, but it does not convey impressions adequately. To let users convey their impressions directly and enjoy a richer video watching experience, we propose a system that enables them to extract their favorite parts of videos as ROI trajectories through simple, intuitive manipulation of a tablet device. It also enables them to share a recorded trajectory with others after stabilizing it so that it is satisfactory to every viewer. Using statistical analysis of user manipulations, we demonstrate a trajectory stabilization approach that eliminates undesirable or uncomfortable artifacts caused by tablet-specific manipulations. Subjective evaluations confirm the system's validity.
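    The abstract above describes stabilizing ROI trajectories recorded from tablet manipulations. As an illustrative sketch only (the paper's actual method is based on statistical analysis of user manipulations and is not reproduced here), a centered moving average over the ROI center coordinates shows the basic idea of smoothing away jitter; the function name and window size are assumptions:

    ```python
    # Hypothetical sketch, not the authors' algorithm: smooth a recorded ROI
    # trajectory (a list of (x, y) center points) with a centered moving
    # average to suppress jitter from touch manipulations.

    def smooth_trajectory(points, window=5):
        """Return a jitter-reduced copy of a list of (x, y) ROI centers."""
        half = window // 2
        smoothed = []
        for i in range(len(points)):
            # Clamp the averaging window at the trajectory boundaries.
            lo = max(0, i - half)
            hi = min(len(points), i + half + 1)
            xs = [p[0] for p in points[lo:hi]]
            ys = [p[1] for p in points[lo:hi]]
            smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
        return smoothed
    ```

    A real stabilizer would weight samples by manipulation statistics rather than uniformly, but the averaging structure is the same.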

  • Depth Range Control in Visually Equivalent Light Field 3D Open Access

    Munekazu DATE  Shinya SHIMIZU  Hideaki KIMATA  Dan MIKAMI  Yoshinori KUSACHI  

    INVITED PAPER-Electronic Displays

    Publicized: 2020/08/13
    Vol: E104-C No:2
    Page(s): 52-58

    The depth appearance of 3D video content depends on the shooting conditions, i.e., the camera positions. Controlling the depth range in post-processing is essential but difficult, because video from arbitrary camera positions must be generated. If light field information is available, video from any viewpoint can be generated exactly, making such post-processing possible; however, a light field contains a huge amount of data and is not easy to capture. To reduce the data quantity, we proposed the visually equivalent light field (VELF), which exploits the characteristics of human vision. Although a number of cameras are required, a VELF can be captured with a camera array. Since camera interpolation uses linear blending, the computation is simple enough that the ray distribution field of a VELF can be formed by optical interpolation in the VELF3D display, which achieves high image quality thanks to its high pixel usage efficiency. In this paper, we summarize the relationship between the characteristics of human vision, VELF, and the VELF3D display. We then propose a method to control the depth range of the image observed on the VELF3D display, and discuss the effectiveness and limitations of displaying the processed image on it. Our method can also be applied to other 3D displays. Since the computation is just weighted averaging, it is suitable for real-time applications.
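    The abstract above notes that VELF interpolates between camera views by linear blending, i.e., a weighted average of neighboring views. A minimal sketch of that operation, assuming two same-sized grayscale images represented as lists of rows (the function name and this representation are assumptions, not from the paper):

    ```python
    # Illustrative sketch of linear blending between two neighboring camera
    # views: each output pixel is a weighted average of the corresponding
    # pixels, with t in [0, 1] selecting the virtual viewpoint position.

    def blend_views(view_a, view_b, t):
        """Linearly blend two same-sized grayscale images (lists of rows).

        t = 0 returns view_a, t = 1 returns view_b.
        """
        return [
            [(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(view_a, view_b)
        ]
    ```

    Because each pixel needs only one multiply-add per source view, this kind of blending is cheap enough for the real-time use the abstract mentions.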