
Keyword Search Result

[Keyword] mixed reality (6 hits)

Results 1-6 of 6
  • Visualization Methods for Outdoor See-Through Vision

    Takahiro TSUDA  Haruyoshi YAMAMOTO  Yoshinari KAMEDA  Yuichi OHTA  

     
    PAPER-Vision and Image

    Vol: E89-D No:6   Page(s): 1781-1789

    Visualizing occluded objects is a useful application of Mixed Reality (MR), which we call "see-through vision." For this application, it is important to display occluded objects in such a manner that they can be recognized intuitively by the user. Here, we evaluated four visualization methods for see-through vision that help the user intuitively recognize occluded objects in outdoor scenes: "elimination of occluding objects," "ground grid," "overlaying a model of the occluding object," and "top-down view." Using a new handheld MR device for outdoor see-through vision, we performed subjective experiments to determine the best combination of methods. The experimental results indicated that the combination of the ground grid, overlaid wireframe models of the occluding objects, and the top-down view was optimal, and that it was not necessary to display the occluding objects themselves for outdoor see-through vision with a handheld device, because users can see them with the naked eye.
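
    As a concrete illustration of the grid cue, the following is a minimal sketch, not the authors' implementation: it projects a ground-plane grid into the camera frame with OpenCV and blends it in semi-transparently, assuming the intrinsic matrix K, distortion coefficients dist, and camera pose (rvec, tvec) are supplied by the MR device's tracking stage.

    ```python
    import cv2
    import numpy as np

    def draw_ground_grid(frame, K, dist, rvec, tvec,
                         half_extent=5.0, step=1.0, alpha=0.4):
        """Blend a ground-plane (Z=0) grid into `frame`; K, dist, rvec and
        tvec are assumed to come from the device's camera tracking."""
        overlay = frame.copy()
        ticks = np.arange(-half_extent, half_extent + step, step)
        for t in ticks:
            # One grid line parallel to the X axis, one parallel to the Y axis.
            segments = [((-half_extent, t, 0.0), (half_extent, t, 0.0)),
                        ((t, -half_extent, 0.0), (t, half_extent, 0.0))]
            for p0, p1 in segments:
                pts2d, _ = cv2.projectPoints(np.float32([p0, p1]),
                                             rvec, tvec, K, dist)
                a = tuple(int(v) for v in pts2d[0].ravel())
                b = tuple(int(v) for v in pts2d[1].ravel())
                cv2.line(overlay, a, b, (0, 255, 0), 1)
        # Semi-transparent blend keeps the real scene visible through the cue.
        return cv2.addWeighted(overlay, alpha, frame, 1.0 - alpha, 0.0)
    ```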

  • Visual-Dimension Interact System (VIS)

    Atsushi ONDA  Tomoyuki OKU  Eddie YU  Yoshie LEE  Ikuro CHOH  Pei-Yi CHIU  Jun OHYA  

     
    PAPER

    Vol: E88-D No:5   Page(s): 947-953

    In this paper, we describe a museum display system that uses mixed reality to enhance interactive viewing: the Visual-dimension Interact System (VIS). Through a transparent interactive interface, the museum visitor can see, manipulate, and interact with the physical exhibit and its virtual information, which are overlaid on one another. Furthermore, the system allows visitors to experience the creation process in an environment as close as possible to the real one. This helps viewers understand the exhibit and, most importantly, gives them a hands-on experience of the creation process itself, leading to a deeper understanding of it.

  • Video-Based Augmented Reality under Orthography without Euclidean Calibration

    Yongduek SEO  Ki-Sang HONG  

     
    LETTER-Multimedia Pattern Processing

    Vol: E87-D No:6   Page(s): 1601-1605

    An algorithm is developed for augmenting real video with virtual graphics objects without computing Euclidean information. To this end, we design a method of specifying a virtual camera that performs Euclidean orthographic projection in the recovered affine space. In addition, our method can generate views of objects shaded by virtual light sources. We present the formulation together with experimental results.
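
    As a worked illustration of the projection model this letter relies on (a sketch under stated assumptions, not the paper's algorithm): under orthography, an image point is an affine function of the 3-D point, x = M X + t with a 2x3 matrix M, so a virtual object expressed in the recovered affine frame can be placed in the image without metric calibration. The numbers below are placeholders.

    ```python
    import numpy as np

    # Hypothetical 2x3 affine camera matrix and image-plane offset (pixels).
    M = np.array([[120.0,  -5.0,  10.0],
                  [  3.0, 118.0, -12.0]])
    t = np.array([320.0, 240.0])

    # Vertices of a virtual unit cube expressed in the recovered affine frame.
    cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                    dtype=float)

    # Affine (orthographic) projection: each row is a (u, v) pixel coordinate.
    pixels = cube @ M.T + t
    print(np.round(pixels, 1))
    ```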

  • Real-Time Camera Parameter Estimation for 3-D Annotation on a Wearable Vision System

    Takashi OKUMA  Takeshi KURATA  Katsuhiko SAKAUE  

     
    PAPER

    Vol: E84-D No:12   Page(s): 1668-1675

    In this paper, we describe a method for estimating external camera parameters in real time and investigate its effectiveness for annotating real scenes with 3-D virtual objects on a wearable computer. The proposed method detects known natural feature points of objects through multiplied color histogram matching and template matching. The external-camera-parameter calculation consists of three algorithms for the PnP problem and uses each algorithm selectively. We implemented an experimental system based on the proposed method on a wearable vision system to enable effective annotation in a mixed-reality environment; the system annotates real objects with 3-D virtual objects. It consists of an ultra-small CCD camera set at the user's eye, an ultra-small display, and a computer. The computer uses the proposed method to determine the camera parameters, renders virtual objects based on those parameters, and synthesizes the images on the display. The system works at 10 frames per second.
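
    The core step can be sketched with OpenCV as a hedged illustration rather than the authors' code: once known natural feature points have been located in the image (here by plain template matching, standing in for the paper's combined color-histogram and template matching, and without the paper's selective use of three PnP algorithms), the external camera parameters follow from a PnP solver. The file names, 3-D coordinates, and intrinsic matrix K are placeholders.

    ```python
    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")                        # current camera image
    templates = [cv2.imread(f"feature{i}.png") for i in range(4)]
    # Known 3-D positions (metres) of the natural feature points on the object.
    model_pts = np.float32([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                            [0.2, 0.15, 0.0], [0.0, 0.15, 0.0]])

    image_pts = []
    for tpl in templates:
        # Locate each known feature by normalized cross-correlation.
        result = cv2.matchTemplate(frame, tpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(result)
        h, w = tpl.shape[:2]
        image_pts.append([max_loc[0] + w / 2.0, max_loc[1] + h / 2.0])

    # Assumed pinhole intrinsics; distortion coefficients omitted for brevity.
    K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])
    ok, rvec, tvec = cv2.solvePnP(model_pts, np.float32(image_pts), K, None)
    # rvec/tvec are the external camera parameters used to render 3-D annotations.
    ```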

  • Digital Media Information Base

    Shunsuke UEMURA  Hiroshi ARISAWA  Masatoshi ARIKAWA  Yasushi KIYOKI  

     
    REVIEW PAPER

    Vol: E82-D No:1   Page(s): 22-33

    This paper surveys recent research activities in three major areas of digital media information bases: video database systems as a typical example of a temporal application, database systems for mixed reality as an instance of a spatial application, and kansei management for digital media retrieval as a case of an application involving humanistic feelings. Current research results from the project "Advanced Database Systems for Integration of Media and User Environments" are reported.

  • A Taxonomy of Mixed Reality Visual Displays

    Paul MILGRAM  Fumio KISHINO  

     
    INVITED PAPER

    Vol: E77-D No:12   Page(s): 1321-1329

    This paper focuses on Mixed Reality (MR) visual displays, a particular subset of Virtual Reality (VR) related technologies that involve the merging of real and virtual worlds somewhere along the "virtuality continuum" which connects completely real environments to completely virtual ones. Probably the best known of these is Augmented Reality (AR), which refers to all cases in which the display of an otherwise real environment is augmented by means of virtual (computer graphic) objects. The converse case on the virtuality continuum is therefore Augmented Virtuality (AV). Six classes of hybrid MR display environments are identified. However, an attempt to distinguish these classes on the basis of whether they are primarily video or computer graphics based, whether the real world is viewed directly or via some electronic display medium, whether the viewer is intended to feel part of the world or on the outside looking in, and whether or not the scale of the display is intended to map orthoscopically onto the real world leads to quite different groupings among the six identified classes, thereby demonstrating the need for an efficient taxonomy, or classification framework, according to which essential differences can be identified. The "obvious" distinction between the terms "real" and "virtual" is shown to have a number of different aspects, depending on whether one is dealing with real or virtual objects, real or virtual images, and direct or non-direct viewing of these. An (approximately) three-dimensional taxonomy is proposed, comprising the following dimensions: Extent of World Knowledge ("how much do we know about the world being displayed?"), Reproduction Fidelity ("how realistically are we able to display it?"), and Extent of Presence Metaphor ("what is the extent of the illusion that the observer is present within that world?").
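
    The three proposed dimensions can be captured as a small data structure; the sketch below only illustrates the taxonomy's shape, with hypothetical 0-1 scores, since the paper defines the axes qualitatively rather than numerically.

    ```python
    from dataclasses import dataclass

    @dataclass
    class MRDisplay:
        name: str
        extent_of_world_knowledge: float    # how much do we know about the displayed world?
        reproduction_fidelity: float        # how realistically are we able to display it?
        extent_of_presence_metaphor: float  # how strong is the illusion of being present in it?

    # Hypothetical placements along the three axes, for illustration only.
    displays = [
        MRDisplay("monitor-based video AR overlay", 0.3, 0.5, 0.2),
        MRDisplay("optical see-through HMD AR",     0.3, 0.6, 0.7),
        MRDisplay("fully modelled virtual world",   1.0, 0.4, 0.9),
    ]
    for d in displays:
        print(d)
    ```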