
Author Search Result

[Author] Takayuki OKATANI (2 hits)

1-2 of 2 hits
  • A Gaze-Reactive Display for Simulating Depth-of-Field of Eyes When Viewing Scenes with Multiple Depths

    Tatsuro ORIKASA  Takayuki OKATANI  

     
    PAPER-Computer Graphics

    Publicized: 2015/11/30
    Vol: E99-D No:3
    Page(s): 739-746

    The depth-of-field limitation of our eyes causes out-of-focus blur in the retinal images. This blur changes dynamically whenever we shift our gaze and the scene point we fixate on changes its depth. This paper proposes an image display that reproduces retinal out-of-focus blur by using a stereoscopic display and eye trackers. Its purpose is to provide the viewer with more realistic visual experiences than conventional (stereoscopic) displays. Unlike previous similar systems, which track only one of the viewer's eyes to estimate the gaze depth, the proposed system tracks both eyes individually using two eye trackers and estimates the gaze depth from the convergence angle calculated by triangulation. This offers several advantages over existing schemes, such as the ability to handle scenes with multiple depths. We describe the implementation of the proposed system in detail and report the results of an experiment conducted to examine its effectiveness. In the experiment, we created a scene with two depths using two LCD displays and a half mirror, and examined how difficult it was for viewers to distinguish between the real scene and its virtual reproduction created by the proposed display system. The results of the experiment show the effectiveness of the proposed approach.
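
    To illustrate the triangulation step mentioned in the abstract, the sketch below estimates gaze depth as the near-intersection of the two tracked gaze rays in a top-down 2D view. This is a minimal sketch, not the authors' implementation; the eye positions, direction vectors, and function names are assumptions introduced for illustration.

    import numpy as np

    def gaze_depth_from_convergence(left_eye, right_eye, left_dir, right_dir):
        """Estimate gaze depth as the point of closest approach of the two
        gaze rays in a top-down view (x = horizontal, z = depth).
        Inputs are hypothetical tracker outputs: eye positions (x, z) and
        unit gaze direction vectors for each eye."""
        p1, p2 = np.asarray(left_eye, float), np.asarray(right_eye, float)
        d1, d2 = np.asarray(left_dir, float), np.asarray(right_dir, float)
        # Solve p1 + t1*d1 = p2 + t2*d2 in the least-squares sense.
        A = np.stack([d1, -d2], axis=1)
        t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
        fixation = 0.5 * ((p1 + t[0] * d1) + (p2 + t[1] * d2))
        return fixation[1]  # z-coordinate = estimated gaze depth

    # Example: eyes 64 mm apart, both converging on a point 0.5 m away.
    left, right = np.array([-0.032, 0.0]), np.array([0.032, 0.0])
    target = np.array([0.0, 0.5])
    ld = (target - left) / np.linalg.norm(target - left)
    rd = (target - right) / np.linalg.norm(target - right)
    print(gaze_depth_from_convergence(left, right, ld, rd))  # ~0.5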

  • HHMM Based Recognition of Human Activity

    Daiki KAWANAKA  Takayuki OKATANI  Koichiro DEGUCHI  

     
    PAPER-Face, Gesture, and Action Recognition

    Vol: E89-D No:7
    Page(s): 2180-2185

    In this paper, we present a method for recognizing human activity as a series of actions from an image sequence. The difficulty of the problem is a chicken-and-egg dilemma: each action needs to be extracted in advance for its recognition, but precise extraction is possible only after the action has been correctly identified. To resolve this dilemma, we use as many models as there are actions of interest, and test each model against a given sequence to find a matching model for each action occurring in the sequence. For each action, a model is designed to represent any activity containing that action. The hierarchical hidden Markov model (HHMM) is employed to represent these models: each model is composed of a submodel of the target action and submodels that can represent any action, connected appropriately. Several experimental results are shown.
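
    A minimal sketch of the model-per-action matching idea is given below. It uses flat Gaussian HMMs from the hmmlearn library as a stand-in for the paper's hierarchical models; the feature representation, state count, and function names are assumptions, and only the "score every model, keep the best match" logic from the abstract is reproduced.

    import numpy as np
    from hmmlearn import hmm

    def train_action_models(training_data, n_states=4):
        """training_data: dict mapping action name -> list of feature
        sequences, each an array of shape (T, d)."""
        models = {}
        for action, sequences in training_data.items():
            X = np.vstack(sequences)                # concatenated sequences
            lengths = [len(s) for s in sequences]   # per-sequence lengths
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                n_iter=50, random_state=0)
            m.fit(X, lengths)
            models[action] = m
        return models

    def recognize(models, sequence):
        """Return the action whose model assigns the highest log-likelihood
        to the observed feature sequence, along with all scores."""
        scores = {action: m.score(sequence) for action, m in models.items()}
        return max(scores, key=scores.get), scores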