
Author Search Result

[Author] Dan MIKAMI (3 hits)

Showing 1-3 of 3 hits
  • Depth Range Control in Visually Equivalent Light Field 3D (Open Access)

    Munekazu DATE  Shinya SHIMIZU  Hideaki KIMATA  Dan MIKAMI  Yoshinori KUSACHI  

    INVITED PAPER-Electronic Displays
    Publicized: 2020/08/13
    Vol: E104-C No:2
    Page(s): 52-58

    3D video content depends on the shooting conditions, in particular the camera positions. Controlling the depth range in the post-processing stage is difficult but essential, since video must be generated for arbitrary camera positions. If full light field information is available, video from any viewpoint can be generated exactly, making such post-processing possible; however, a light field contains a huge amount of data and is not easy to capture. To reduce the data volume, we previously proposed the visually equivalent light field (VELF), which exploits the characteristics of human vision. Although a number of cameras are required, a VELF can be captured with a camera array. Because camera interpolation is performed by linear blending, the calculation is simple enough that the ray distribution field of the VELF can be constructed by optical interpolation in the VELF3D display, which produces high image quality thanks to its high pixel-usage efficiency. In this paper, we summarize the relationship between the characteristics of human vision, VELF, and the VELF3D display. We then propose a method to control the depth range of the image observed on the VELF3D display and discuss the effectiveness and limitations of displaying the processed image on it. Our method can also be applied to other 3D displays, and since the calculation is just weighted averaging, it is suitable for real-time applications.
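The linear-blending interpolation the abstract mentions can be sketched as a simple weighted average of neighboring camera images. This is a minimal illustration, not the authors' implementation; the function name and parameters are assumptions for demonstration.

```python
import numpy as np

def blend_views(img_a, img_b, alpha):
    """Linearly blend two neighboring camera images.

    alpha in [0, 1] is the normalized viewpoint position between
    camera A (alpha = 0) and camera B (alpha = 1). Because the whole
    operation is a weighted average, it is cheap enough for the
    real-time use the abstract describes.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return ((1.0 - alpha) * a + alpha * b).astype(img_a.dtype)
```

In the VELF3D display this blending is realized optically rather than numerically, but the underlying computation is the same weighted averaging.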

  • Enhancing Memory-Based Particle Filter with Detection-Based Memory Acquisition for Robustness under Severe Occlusion

    Dan MIKAMI  Kazuhiro OTSUKA  Shiro KUMANO  Junji YAMATO  

    PAPER-Image Recognition, Computer Vision
    Vol: E95-D No:11
    Page(s): 2693-2703

    A novel enhancement of the memory-based particle filter is proposed for visual pose tracking under severe occlusions: the addition of a detection-based memory acquisition mechanism. The memory-based particle filter, M-PF, is a particle filter that predicts prior distributions from the past history of the target state stored in memory. It achieves high robustness against abrupt changes in movement direction and quick recovery from target loss due to occlusions, but such performance requires a sufficient history in memory. Conventionally, M-PF acquires memory online under the assumption of simple target dynamics without occlusions, which guarantees high-quality target-track histories but narrows the coverage of M-PF in practice. In this paper, we propose a new memory acquisition mechanism for M-PF that supports practical conditions including complex dynamics and severe occlusions. The key idea is to use a target detector that produces an additional prior distribution of the target state; we call the result M-PFDMA, for M-PF with detection-based memory acquisition. The detection-based prior distribution predicts the possible target position and pose well even under the limited visibility caused by occlusions. These better prior distributions contribute to stable estimation of the target state, which is then added to the memorized data. As a result, M-PFDMA can start with no memory entries and soon achieve stable tracking even in severe conditions. Experiments confirm M-PFDMA's good performance under such conditions.
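The idea of mixing memory-derived and detection-derived priors can be sketched as follows. This is a toy illustration with a scalar state; all names, parameters, and the mixing scheme are assumptions for demonstration, not the paper's algorithm or API.

```python
import random

def sample_prior(memory, detections, n_particles,
                 detection_weight=0.5, noise=0.1):
    """Draw prior particles for the next frame from two sources:
    past target states stored in memory (the M-PF idea) and current
    detector outputs (the detection-based memory acquisition).

    When detections are available, each particle is seeded from a
    detection with probability detection_weight, otherwise from a
    memorized state; Gaussian noise models prediction uncertainty.
    """
    particles = []
    for _ in range(n_particles):
        if detections and random.random() < detection_weight:
            base = random.choice(detections)   # detector-derived prior
        else:
            base = random.choice(memory)       # memory-derived prior
        particles.append(base + random.gauss(0.0, noise))
    return particles
```

With an empty memory, the detector alone seeds the particles, which mirrors how M-PFDMA can start tracking with no memory entries.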

  • Extrinsic Camera Calibration of Display-Camera System with Cornea Reflections

    Kosuke TAKAHASHI  Dan MIKAMI  Mariko ISOGAWA  Akira KOJIMA  Hideaki KIMATA  

    PAPER-Image Recognition, Computer Vision
    Publicized: 2018/09/26
    Vol: E101-D No:12
    Page(s): 3199-3208

    In this paper, we propose a novel method to extrinsically calibrate a camera to a 3D reference object that is not directly visible from the camera. We use a human cornea as a spherical mirror and calibrate the extrinsic parameters from the reflections of the reference points. The main contribution of this paper is a cornea-reflection-based calibration algorithm with a simple configuration: five reference points on a single plane and one mirror pose. We derive a linear equation and obtain a closed-form solution for the extrinsic calibration by introducing two ideas. The first is to model the cornea as a virtual sphere, which enables us to estimate the center of the cornea sphere from its projection. The second is to use basis vectors to represent the positions of the reference points, which lets us handle their 3D information compactly. We demonstrate the performance of the proposed method with qualitative and quantitative evaluations on synthesized and real data.
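The first idea, estimating the center of the cornea sphere from its projection, can be sketched under a simple weak-perspective approximation: the sphere's depth scales with the ratio of its known physical radius to its apparent radius in the image. This is an illustrative simplification, not the paper's derivation; all names and parameters are assumptions.

```python
import numpy as np

def estimate_sphere_center(f, r_sphere, u, v, r_image, cx, cy):
    """Approximate the 3D center of a sphere (e.g. a cornea modeled
    as a virtual sphere) from its image projection, assuming a
    pinhole camera with focal length f (pixels) and principal point
    (cx, cy). Weak-perspective approximation: depth is proportional
    to the known radius over the projected radius.
    """
    z = f * r_sphere / r_image   # depth from apparent size
    x = (u - cx) * z / f         # back-project the projection center
    y = (v - cy) * z / f
    return np.array([x, y, z])
```

Given the sphere center, the reflections of the reference points constrain the remaining extrinsic parameters, which the paper recovers in closed form via a linear system.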