
Keyword Search Results

[Keyword] reality (68 hits)

Showing 1-20 of 68 hits

  • Virtual Reality Campuses as New Educational Metaverses

    Katashi NAGAO  

     
    INVITED PAPER
    Publicized: 2022/10/13 · Vol: E106-D No:2 · Page(s): 93-100

    This paper focuses on the potential value and future prospects of using virtual reality (VR) technology in online education. After outlining online education and the latest VR technology, we focus on metaverse construction and artificial intelligence (AI) for educational VR use. In particular, we describe a virtual university campus that offers on-demand VR lectures in virtual lecture halls, automated evaluation of student learning and training using machine learning, and links between multiple digital campuses.

  • Spy in Your Eye: Spycam Attack via Open-Sided Mobile VR Device

    Jiyeon LEE  Kilho LEE  

     
    LETTER-Human-computer Interaction
    Publicized: 2022/07/22 · Vol: E105-D No:10 · Page(s): 1817-1820

    Privacy violations via spy cameras are becoming increasingly serious. With the recent advent of various smart home IoT devices, such as smart TVs and robot vacuum cleaners, spycam attacks that steal users' information are being carried out in increasingly unpredictable ways. In this letter, we introduce a new spycam attack on mobile WebVR environments. It is performed by a web attacker who maliciously accesses the back-facing cameras of victims' mobile devices while they browse the attacker's WebVR site. Through sophisticated content placement in VR scenes, the attacker can capture the victims' surroundings at a desired field of view, resulting in serious privacy breaches for mobile VR users. We show that this threat works against major browsers in a practical and stealthy manner.

  • Single-Image Camera Calibration for Furniture Layout Using Natural-Marker-Based Augmented Reality

    Kazumoto TANAKA  Yunchuan ZHANG  

     
    LETTER-Multimedia Pattern Processing
    Publicized: 2022/03/09 · Vol: E105-D No:6 · Page(s): 1243-1248

    We propose an augmented-reality-based method for arranging furniture using natural markers extracted from the edges of the walls of rooms. The proposed method extracts natural markers and estimates the camera parameters from single images of rooms using deep neural networks. Experimental results show that in all the measurements, the superimposition error of the proposed method was lower than that of general marker-based methods that use practical-sized markers.
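
    The abstract does not detail the estimation step, but once the natural-marker points on the wall edges have been detected (the paper uses deep neural networks for this), a camera pose can be recovered with a standard perspective-n-point solve. The sketch below assumes OpenCV and uses invented room coordinates and intrinsics; it illustrates the general mechanism, not the paper's exact method.

        # Hypothetical PnP step: recover camera pose from detected wall-edge
        # "natural marker" points. All coordinates and intrinsics are invented.
        import numpy as np
        import cv2

        # Four wall-corner points (meters) and their detected projections (pixels).
        obj = np.float32([[0, 0, 0], [4, 0, 0], [4, 0, 2.5], [0, 0, 2.5]])
        img = np.float32([[210, 620], [1130, 600], [1100, 160], [240, 180]])
        K = np.float32([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]])  # assumed intrinsics

        ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
        print("pose found:", ok, "rotation:", rvec.ravel(), "translation:", tvec.ravel())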

  • Acquisition of the Width of a Virtual Body through Collision Avoidance Trials

    Yoshiaki SAITO  Kazumasa KAWASHIMA  Masahito HIRAKAWA  

     
    PAPER-Human-computer Interaction
    Publicized: 2021/02/02 · Vol: E104-D No:5 · Page(s): 741-751

    The progress of immersive technology enables researchers and developers to construct work spaces that are freed from real-world constraints. This has motivated us to investigate the role of the human body. In this research, we examine human cognitive behaviors in understanding the width of a virtual body through simple yet meaningful experiments using virtual reality (VR). In the experiments, participants were modeled as an invisible board, and a spherical object was thrown at them to provide information for exploring the width of the invisible body. Audio and visual feedback were provided when the object came into contact with the board (body). We first explored how precisely the participants perceived the virtual body width. Next, we examined how body perception was generated and changed over the trials when the participants actively moved right or left to avoid collision with approaching objects. The results indicated that the participants succeeded in avoiding collisions within a limited number of trials (at most 14) under the experimental conditions. It was also found that they initially postponed deciding how far they should move and then started taking evasive action earlier as they became aware of the virtual body.

  • Presenting Walking Route for VR Zombie

    Nobuchika SAKATA  Kohei KANAMORI  Tomu TOMINAGA  Yoshinori HIJIKATA  Kensuke HARADA  Kiyoshi KIYOKAWA  

     
    PAPER-Human-computer Interaction
    Publicized: 2020/09/30 · Vol: E104-D No:1 · Page(s): 162-173

    The aim of this study is to calculate optimal walking routes in real space for users partaking in immersive virtual reality (VR) games without compromising their immersion. To this end, we propose a navigation system that automatically determines the route a VR user should take to avoid collisions with surrounding obstacles. The proposed method is evaluated by simulating a real environment and is verified to be capable of calculating and displaying walking routes that safely guide users to their destinations without compromising their VR immersion. In addition, users walking in real space while experiencing VR content can choose between 6-DoF (six degrees of freedom) and 3-DoF (three degrees of freedom) conditions; we expect users to prefer the 3-DoF condition, as they tend to walk longer under it while using VR content. In dynamic situations, when two pedestrians are added to a designated computer-generated real environment, it is necessary to calculate the walking route using moving-body prediction and to display the moving body in virtual space to preserve immersion.
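
    The abstract does not specify the planner; as a rough illustration of computing an obstacle-avoiding walking route on a floor grid, here is a minimal A* sketch (the grid, start, and goal are invented, and the paper's moving-body prediction is omitted):

        # Minimal grid A* sketch for an obstacle-avoiding walking route.
        import heapq

        def astar(grid, start, goal):
            # grid: 2D list, 0 = free, 1 = obstacle; 4-connected moves.
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
            open_set, came, g = [(h(start), start)], {}, {start: 0}
            while open_set:
                _, cur = heapq.heappop(open_set)
                if cur == goal:                      # reconstruct the path
                    path = [cur]
                    while cur in came:
                        cur = came[cur]; path.append(cur)
                    return path[::-1]
                for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (cur[0] + d[0], cur[1] + d[1])
                    if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                            and grid[nxt[0]][nxt[1]] == 0
                            and g[cur] + 1 < g.get(nxt, 1 << 30)):
                        g[nxt] = g[cur] + 1
                        came[nxt] = cur
                        heapq.heappush(open_set, (g[nxt] + h(nxt), nxt))
            return None

        room = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0]]   # 1 = obstacle
        print(astar(room, (0, 0), (2, 3)))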

  • Anomaly Detection of Folding Operations for Origami Instruction with Single Camera

    Hiroshi SHIMANUKI  Toyohide WATANABE  Koichi ASAKURA  Hideki SATO  Taketoshi USHIAMA  

     
    PAPER-Pattern Recognition
    Publicized: 2020/02/25 · Vol: E103-D No:5 · Page(s): 1088-1098

    When people learn a handicraft from instructional content such as books, videos, and web pages, many of them give up halfway because the content does not always make the procedure clear. This study aims to provide origami learners, especially beginners, with feedback on their folding operations. An approach is proposed for recognizing the state of the learner using a single top-view camera and pointing out mistakes made during origami folding. First, an instruction model that stores easy-to-follow folding operations is defined. Second, a method for recognizing the state of the learner's origami sheet is proposed. Third, a method is proposed for detecting the learner's mistakes by means of anomaly detection with a one-class support vector machine (one-class SVM) classifier, using the folding progress and the difference between the learner's origami shape and the correct shape. Because shadows and occlusions caused by the learner's hands introduce noise into the camera images, the shapes of the origami sheet are not always extracted accurately. To train the one-class SVM classifier with high accuracy, a data-cleansing method that automatically filters out noisy video frames is proposed. Moreover, using the statistics of features extracted from the frames in a sliding window reduces the influence of the noise. The proposed method was experimentally demonstrated to be sufficiently accurate and robust against noise, and its false alarm rate (false positive rate) can be reduced to zero. Requiring only a single camera and common origami paper, the proposed method makes it possible to monitor mistakes made by origami learners and support their self-learning.
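
    As a minimal illustration of the anomaly-detection step, the sketch below trains a one-class SVM on sliding-window statistics of two toy per-frame features standing in for the paper's folding progress and shape difference (scikit-learn assumed; all data are synthetic):

        # One-class SVM over sliding-window feature statistics (toy data).
        import numpy as np
        from sklearn.svm import OneClassSVM

        def window_stats(frames, win=15):
            # Mean/std over a sliding window to damp per-frame extraction noise.
            f = np.asarray(frames, dtype=float)
            out = [np.concatenate([f[i:i + win].mean(0), f[i:i + win].std(0)])
                   for i in range(len(f) - win + 1)]
            return np.asarray(out)

        rng = np.random.default_rng(0)
        # Toy per-frame features: [folding progress, shape difference from reference].
        correct = np.c_[np.linspace(0, 1, 300), 0.05 * rng.random(300)]
        learner = correct.copy()
        learner[150:170, 1] += 0.5          # injected folding mistake

        # Train only on correct folding sequences (one-class setting).
        clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(window_stats(correct))
        labels = clf.predict(window_stats(learner))   # -1 marks anomalous windows
        print("anomalous windows:", np.where(labels == -1)[0])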

  • A Tile-Based Solution Using Cubemap for Viewport-Adaptive 360-degree Video Delivery

    Huyen T. T. TRAN  Duc V. NGUYEN  Nam PHAM NGOC  Truong Cong THANG  

     
    PAPER
    Publicized: 2019/01/22 · Vol: E102-B No:7 · Page(s): 1292-1300

    360-degree video delivery in virtual reality is very challenging because 360-degree videos require much higher bandwidth than conventional videos. To overcome this problem, viewport-adaptive streaming has been introduced. In this study, we propose a new adaptation method for tiling-based viewport-adaptive streaming of 360-degree videos. For content preparation, the Cubemap projection format is used, where faces or parts of a face are encoded as tiles. The problem is formulated as an optimization problem in which each visible tile is weighted by how much it overlaps the viewport, and an approximation algorithm is proposed to solve it. The proposed method and reference methods are evaluated under different tiling schemes and bandwidths. Experiments show that the Cubemap format with tiling provides substantial benefits in terms of storage, viewport quality across different viewing directions and bandwidths, and tolerance to prediction errors.
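
    The optimization itself is only named in the abstract; the toy sketch below conveys the general idea of weighting each visible tile by its viewport overlap and greedily spending a bandwidth budget where the weighted gain per bit is largest (the tile set, bitrate ladder, and budget are invented):

        # Toy viewport-weighted tile bitrate allocation (illustrative only).
        def allocate(tiles, overlaps, ladder, budget):
            # tiles: ids; overlaps[t]: fraction of tile t inside the viewport;
            # ladder: ascending bitrates (kbps); budget: total bandwidth (kbps).
            choice = {t: ladder[0] for t in tiles}      # every tile at base quality
            spent = sum(choice.values())
            upgraded = True
            while upgraded:                             # greedy upgrade loop
                upgraded = False
                best, best_gain = None, 0.0
                for t in tiles:
                    i = ladder.index(choice[t])
                    if i + 1 < len(ladder):
                        cost = ladder[i + 1] - ladder[i]
                        gain = overlaps[t] / cost       # weighted gain per bit
                        if spent + cost <= budget and gain > best_gain:
                            best, best_gain, best_cost = t, gain, cost
                if best is not None:
                    choice[best] = ladder[ladder.index(choice[best]) + 1]
                    spent += best_cost
                    upgraded = True
            return choice

        tiles = ["front", "left", "right", "top"]
        overlaps = {"front": 0.7, "left": 0.2, "right": 0.1, "top": 0.0}
        print(allocate(tiles, overlaps, ladder=[500, 1000, 2000], budget=5500))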

  • High-Speed Spelling in Virtual Reality with Sequential Hybrid BCIs

    Zhaolin YAO  Xinyao MA  Yijun WANG  Xu ZHANG  Ming LIU  Weihua PEI  Hongda CHEN  

     
    LETTER-Biological Engineering
    Publicized: 2018/07/25 · Vol: E101-D No:11 · Page(s): 2859-2862

    A new hybrid brain-computer interface (BCI), based on sequential control by eye tracking and steady-state visual evoked potentials (SSVEPs), has been proposed for high-speed spelling in virtual reality (VR) with a 40-target virtual keyboard. During target selection, the gaze point is first detected by an eye-tracking accessory. A 4-target block is then selected for further target selection by a 4-class SSVEP BCI. The system can type at a speed of 1.25 characters/s in a cue-guided target selection task. Online experiments on three subjects achieved an average information transfer rate (ITR) of 360.7 bits/min.
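
    To make the reported figures interpretable, here is the standard Wolpaw ITR formula commonly used for BCI spellers (the paper's exact computation is not given in the abstract). At 40 targets and 0.8 s per selection, perfect accuracy would cap the ITR near 399 bits/min, so 360.7 bits/min corresponds to roughly 95% selection accuracy:

        # Wolpaw ITR in bits/min: a standard BCI metric, shown for context.
        from math import log2

        def itr_bits_per_min(n, p, t_sec):
            # n targets, selection accuracy p, t_sec seconds per selection.
            bits = log2(n)
            if p < 1.0:
                bits += p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
            return bits * 60.0 / t_sec

        print(itr_bits_per_min(40, 1.00, 0.8))   # ~399.1 bits/min ceiling
        print(itr_bits_per_min(40, 0.95, 0.8))   # ~357.9 bits/min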

  • An Efficient Acoustic Distance Rendering Algorithm for Proximity Control in Virtual Reality Systems

    Yonghyun BAEK  Tegyu LEE  Young-cheol PARK  

     
    LETTER-Digital Signal Processing
    Vol: E100-A No:12 · Page(s): 3054-3060

    In this letter, we propose an acoustic distance rendering (ADR) algorithm that can efficiently create the proximity effect in virtual reality (VR) systems. By observing the variation of acoustic cues caused by the movement of a sound source in the near field, we develop a model that closely approximates the near-field transfer function (NFTF). The developed model is used to efficiently compensate for the near-field effect on the head-related transfer function (HRTF). The proposed algorithm is implemented and tested as an audio plugin for a VR platform, and the test results confirm its efficiency.
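
    The NFTF model itself is not given in the abstract; as a crude stand-in, the sketch below approximates the dominant near-field cue (inverse-distance level growth and the resulting increase in interaural level difference) with spherical-head geometry, ignoring the frequency dependence a real NFTF captures (the head radius and reference distance are assumptions):

        # Crude near-field gain correction relative to a far-field HRTF distance.
        import numpy as np

        HEAD_RADIUS = 0.0875     # m, spherical-head assumption
        HRTF_DIST = 1.0          # m, assumed HRTF measurement distance

        def near_field_gains(src_dist, azimuth_rad):
            # Per-ear source distances for a spherical head (2D top view).
            ear_l, ear_r = np.array([-HEAD_RADIUS, 0.0]), np.array([HEAD_RADIUS, 0.0])
            src = src_dist * np.array([np.sin(azimuth_rad), np.cos(azimuth_rad)])
            dl, dr = np.linalg.norm(src - ear_l), np.linalg.norm(src - ear_r)
            # Gain relative to the same geometry at the measurement distance.
            ref = HRTF_DIST * np.array([np.sin(azimuth_rad), np.cos(azimuth_rad)])
            rl, rr = np.linalg.norm(ref - ear_l), np.linalg.norm(ref - ear_r)
            return rl / dl, rr / dr

        print(near_field_gains(0.25, np.radians(90)))   # source near the right ear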

  • Study on Compact Head-Mounted Display System Using Electro-Holography for Augmented Reality (Open Access)

    Eishin MURAKAMI  Yuki OGURO  Yuji SAKAMOTO  

     
    INVITED PAPER
    Vol: E100-C No:11 · Page(s): 965-971

    Head-mounted displays (HMDs) and augmented reality (AR) are being actively studied. However, ordinary AR HMDs for visual assistance have a problem: users have difficulty focusing simultaneously on a real target object and the displayed image, because the image can only be displayed at a fixed distance from the user's eyes regardless of where the real object exists in three-dimensional space. We therefore considered incorporating holography, an ideal three-dimensional (3D) display technology, into an AR HMD system. The few previous studies on holographic HMDs have had technical problems, including faults in size and weight. This paper proposes a compact holographic AR HMD system aimed at an ideal 3D AR HMD that can correctly reconstruct an image at any depth. A Fourier transform optical system (FTOS) was implemented using only one lens to achieve a compact and lightweight structure, and a compact holographic AR HMD system was constructed. The experimental results showed that the proposed system can reconstruct sharp images at the correct depth over a wide depth range. This study enables an ideal 3D AR HMD system in which users can simultaneously view a real target object and the reconstructed image without visual fatigue.
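
    As background for the Fourier-transform optics, a phase-only hologram for such a system is often computed by iterative phase retrieval; the sketch below runs a few Gerchberg-Saxton iterations on a random target image. This is a generic textbook procedure, not the paper's implementation:

        # Gerchberg-Saxton sketch for a phase-only Fourier hologram (toy data).
        import numpy as np

        rng = np.random.default_rng(3)
        target = rng.random((64, 64))                 # desired reconstruction amplitude
        phase = np.exp(2j * np.pi * rng.random((64, 64)))
        for _ in range(50):
            far = np.fft.fft2(phase)                  # propagate hologram -> image plane
            far = target * np.exp(1j * np.angle(far)) # impose target amplitude
            near = np.fft.ifft2(far)                  # back-propagate
            phase = np.exp(1j * np.angle(near))       # phase-only hologram constraint
        print("hologram phase shape:", np.angle(phase).shape)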

  • Saliency-Guided Stereo Camera Control for Comfortable VR Explorations

    Yeo-Jin YOON  Jaechun NO  Soo-Mi CHOI  

     
    LETTER-Human-computer Interaction
    Publicized: 2017/06/01 · Vol: E100-D No:9 · Page(s): 2245-2248

    Visual comfort and accurate depth perception are crucial requirements for virtual reality (VR) applications. This paper investigates major causes of visual discomfort and proposes a novel virtual camera control method that uses visual saliency to minimize discomfort. We extract the saliency of each scene and adjust the convergence plane accordingly to preserve realistic 3D effects. We also evaluate the effectiveness of our method on free-form architecture models. The results indicate that the proposed saliency-guided camera control is more comfortable than typical camera control and gives more realistic depth perception.
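
    One simple way to realize the idea, sketched below with invented numbers, is to place the convergence plane at the saliency-weighted mean depth so the most-attended content carries near-zero screen parallax; the paper's actual adjustment policy may differ:

        # Saliency-weighted convergence distance and resulting screen parallax.
        import numpy as np

        def convergence_distance(depth_map, saliency_map):
            w = saliency_map / saliency_map.sum()
            return float((w * depth_map).sum())

        def screen_parallax(z, z_conv, baseline, focal):
            # Zero at the convergence plane, negative (crossed) in front of it.
            return baseline * focal * (1.0 / z_conv - 1.0 / z)

        depth = np.full((4, 4), 5.0); depth[1:3, 1:3] = 2.0   # salient object at 2 m
        sal = np.ones((4, 4)); sal[1:3, 1:3] = 10.0
        zc = convergence_distance(depth, sal)
        print(f"convergence at {zc:.2f} m, object parallax: "
              f"{screen_parallax(2.0, zc, baseline=0.065, focal=0.02) * 1000:.2f} mm")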

  • Design and Comparison of Immersive Gesture Interfaces for HMD Based Virtual World Navigation

    Bong-Soo SOHN  

     
    LETTER-Computer Graphics
    Publicized: 2016/04/05 · Vol: E99-D No:7 · Page(s): 1957-1960

    Mass-market head-mounted displays (HMDs) are currently attracting wide interest from consumers because they allow immersive virtual reality (VR) experiences at an affordable cost. Flying over a virtual environment is a common HMD application, but conventional keyboard- or mouse-based interfaces decrease the level of immersion. Motivated by this, we design three types of immersive gesture interfaces (bird, superman, and hand) for flyover navigation. A Kinect depth camera is used to recognize each gesture by extracting and analyzing the user's body skeleton. We evaluate the usability of each interface through a user study. As a result, we analyze the advantages and disadvantages of each interface and demonstrate that our gesture interfaces are preferable for obtaining a high level of immersion and fun in an HMD-based VR environment.
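
    As a toy illustration of skeleton-based gesture detection (the paper's actual recognizers are not described in the abstract), the check below flags a "bird" pose when both wrists are roughly at shoulder height and spread wide; the joint names and thresholds are invented:

        # Toy "bird" pose detector on Kinect-style joints (all values invented).
        import numpy as np

        def is_bird_pose(joints, spread=1.2, height_tol=0.15):
            # joints: dict of joint name -> (x, y, z) in meters, y up.
            lw, rw = np.array(joints["wrist_left"]), np.array(joints["wrist_right"])
            ls, rs = np.array(joints["shoulder_left"]), np.array(joints["shoulder_right"])
            wide = np.linalg.norm(lw - rw) > spread          # arms spread apart
            level = (abs(lw[1] - ls[1]) < height_tol and     # wrists near
                     abs(rw[1] - rs[1]) < height_tol)        # shoulder height
            return wide and level

        pose = {"wrist_left": (-0.8, 1.4, 0), "wrist_right": (0.8, 1.4, 0),
                "shoulder_left": (-0.2, 1.45, 0), "shoulder_right": (0.2, 1.45, 0)}
        print(is_bird_pose(pose))   # True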

  • A Kinect-Based System for Balance Rehabilitation of Stroke Patients

    Chung-Liang LAI  Chien-Ming TSENG  D. ERDENETSOGT  Tzu-Kuan LIAO  Ya-Ling HUANG  Yung-Fu CHEN  

     
    PAPER
    Publicized: 2016/01/28 · Vol: E99-D No:4 · Page(s): 1032-1037

    A low-cost prototype Kinect-based rehabilitation system was developed for recovering the balance capability of stroke patients. A total of 16 stroke patients were recruited for the study. After excluding 3 patients who failed to finish all of the rehabilitation sessions, the data of 13 patients were analyzed. The results exhibited a significant improvement in the patients' balance function after 3 weeks of balance training. Additionally, a questionnaire survey revealed that the designed system was perceived as effective and easy to operate.

  • A Novel Earthquake Education System Based on Virtual Reality

    Xiaoli GONG  Yanjun LIU  Yang JIAO  Baoji WANG  Jianchao ZHOU  Haiyang YU  

     
    PAPER-Human-computer Interaction
    Publicized: 2015/09/16 · Vol: E98-D No:12 · Page(s): 2242-2249

    An earthquake is a destructive natural disaster that cannot be predicted accurately and causes devastating damage and losses. In fact, much of the damage could be prevented if people knew what to do during and after an earthquake. Earthquake education is the most important way to raise public awareness and mitigate earthquake damage. Generally, earthquake education consists of conducting traditional earthquake drills in schools or communities or experiencing an earthquake through an earthquake simulator. However, these approaches are unrealistic or expensive to apply, especially in underdeveloped areas where earthquakes occur frequently. In this paper, an earthquake drill simulation system based on virtual reality (VR) technology is proposed. A user is immersed in a 3D virtual earthquake environment through a head-mounted display and controls an avatar in the virtual scene via Kinect to respond to the simulated earthquake environment generated by SIGVerse, a simulation platform. It is a cost-effective solution that is easy to deploy. The design and implementation of this VR system are presented, and a dormitory earthquake simulation is conducted. Results show that powerful earthquakes can be simulated successfully and that VR technology can be applied to earthquake drills.

  • Implementation of an Omnidirectional Human Motion Capture System Using Multiple Kinect Sensors

    Junghwan KIM  Inwoong LEE  Jongyoo KIM  Sanghoon LEE  

     
    LETTER-Measurement Technology
    Vol: E98-A No:9 · Page(s): 2004-2008

    Because it eases the implementation of various user-interactive applications, much research on motion recognition has been conducted using Kinect. However, one drawback of Kinect is that the skeletal information it provides assumes that the user faces the sensor. Thus, the skeletal information is likely incorrect when the user turns his or her back to Kinect, which may hinder motion recognition in the application. In this paper, we implement a highly accurate human motion capture system by installing six Kinect sensors over 360 degrees. The proposed method obtains the skeleton more accurately by assigning higher weights to skeletons captured by the Kinect sensors that the user faces. Toward this goal, the user's front vector is traced over time to determine whether the user is facing each Kinect, and the more reliable joint information is then used to construct a skeletal representation of each user.
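
    A minimal sketch of facing-weighted fusion, assuming all sensors are already calibrated into one world frame: each sensor's skeleton is weighted by how directly the user faces it (the weighting function and geometry below are invented):

        # Facing-weighted skeleton fusion from several sensors (illustrative).
        import numpy as np

        def fuse_joints(skeletons, sensor_dirs, front):
            # skeletons: (n_sensors, n_joints, 3) joint positions in a shared frame;
            # sensor_dirs: (n_sensors, 3) unit vectors from user toward each sensor;
            # front: (3,) unit vector of the user's facing direction.
            # A sensor the user faces (front . dir ~ 1) gets high weight;
            # one behind the user (front . dir ~ -1) gets ~0.
            w = np.clip(sensor_dirs @ front, 0.0, None) + 1e-6
            w /= w.sum()
            return np.tensordot(w, skeletons, axes=1)   # weighted joint average

        # Toy example: two of six sensors, user facing +x.
        skel = np.stack([np.zeros((20, 3)), np.ones((20, 3))])   # 20 joints each
        dirs = np.array([[1.0, 0, 0], [-1.0, 0, 0]])
        print(fuse_joints(skel, dirs, np.array([1.0, 0, 0]))[0])  # ~ sensor-0 joints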

  • Distribution of Attention in Augmented Reality: Comparison between Binocular and Monocular Presentation (Open Access)

    Akihiko KITAMURA  Hiroshi NAITO  Takahiko KIMURA  Kazumitsu SHINOHARA  Takashi SASAKI  Haruhiko OKUMURA  

     
    INVITED PAPER
    Vol: E97-C No:11 · Page(s): 1081-1088

    This study investigated the distribution of attention in frontal space in augmented reality (AR). We conducted two experiments comparing binocular and monocular observation of a presented AR image. According to a previous study, when participants observed an AR image in monocular presentation, they perceived it as more distant than in binocular vision. We therefore predicted that attention would need to be shifted between the AR image and the background in the binocular observation but not in the monocular one, enabling an observer to distribute visual attention across a wider space in the monocular observation. In the experiments, participants performed two tasks concurrently to measure the size of the useful field of view (UFOV). One task was letter/number discrimination with an AR image presented in the central field of view (the central task). The other was luminance-change detection with dots presented in the peripheral field of view (the peripheral task). A depth difference existed between the AR image and the location of the peripheral task in Experiment 1 but not in Experiment 2. The results of Experiment 1 indicated that the UFOV was wider in the monocular observation than in the binocular observation. In Experiment 2, the size of the UFOV in the monocular observation was equivalent to that in the binocular observation. It becomes difficult for a participant to observe stimuli in the background under binocular observation when there is a depth difference between the AR image and the background. These results indicate that monocular presentation in AR is superior to binocular presentation, and that even under the condition most favorable to binocular presentation, monocular presentation is equivalent to it in terms of the UFOV.

  • Light Source Estimation in Mobile Augmented Reality Scenes by Using Human Face Geometry

    Emre KOC  Selim BALCISOY  

     
    PAPER-Augmented Reality
    Vol: E97-D No:8 · Page(s): 1974-1982

    Light source estimation and virtual lighting must be believable in terms of appearance and correctness in augmented reality scenes. Because of the illumination complexity of outdoor scenes, realistic lighting for augmented reality remains a challenging problem. In this paper, we propose a framework based on estimating environmental lighting from well-defined objects, specifically human faces. The method is tuned for outdoor use, and the algorithm is further enhanced to illuminate virtual objects exposed to direct sunlight. Our model can be integrated into existing mobile augmented reality frameworks to enhance visual perception.
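
    For context, the classical building block behind such methods is a Lambertian least-squares fit: given surface normals (here random stand-ins for face geometry) and observed intensities, the light direction falls out of a linear solve. The paper's face model and direct-sunlight handling are more elaborate:

        # Directional light estimation from normals under a Lambertian model.
        import numpy as np

        rng = np.random.default_rng(1)
        normals = rng.normal(size=(500, 3))
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)

        true_light = np.array([0.4, 0.7, 0.59])
        true_light /= np.linalg.norm(true_light)
        intensity = np.clip(normals @ true_light, 0, None)   # I = max(0, n . l)

        lit = intensity > 0.05                               # drop shadowed points
        l, *_ = np.linalg.lstsq(normals[lit], intensity[lit], rcond=None)
        l /= np.linalg.norm(l)
        print("estimated light direction:", np.round(l, 3))  # ~ true_light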

  • Comparison of Output Devices for Augmented Audio Reality

    Kazuhiro KONDO  Naoya ANAZAWA  Yosuke KOBAYASHI  

     
    PAPER-Speech and Hearing
    Vol: E97-D No:8 · Page(s): 2114-2123

    We compared two audio output devices for augmented audio reality applications. In these applications, we plan to use speech annotations on top of the actual ambient environment, so it is essential that the output devices deliver intelligible speech annotation along with transparent delivery of the environmental auditory scene. Two candidate devices were compared. The first was the bone-conduction headphone, which delivers speech signals by vibrating the skull while leaving the ear canals open, so normal hearing of surrounding sound is left intact. The other was the binaural microphone/earphone combo, which has a form factor similar to a regular earphone but integrates a small microphone at the ear canal entry; the input from these microphones can be fed back to the earphones along with the annotation speech. We also compared these devices to normal hearing (i.e., without headphones or earphones) for reference. We compared speech intelligibility when competing babble noise was presented simultaneously from the surrounding environment. It was found that the binaural combo can generally deliver speech at intelligibility comparable to or higher than the bone-conduction headphones. However, with the binaural combo, the ear canal transfer characteristics were altered significantly because the earphones seal the ear canals. Accordingly, when we employed a compensation filter to account for this transfer-function deviation, the resulting speech intelligibility was significantly higher. Both devices were found to be acceptable as audio output for augmented audio reality applications, since both deliver speech at high intelligibility even in the presence of significant competing noise; in fact, both output methods delivered speech at higher intelligibility than natural speech, especially at low SNR.
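
    The compensation filter mentioned above can be sketched as a regularized spectral division that maps the blocked-canal response back toward the open-ear response; the impulse responses below are synthetic stand-ins, and the paper's actual filter design is not given in the abstract:

        # Regularized inverse-filter sketch for ear-canal compensation (toy data).
        import numpy as np

        n = 512
        rng = np.random.default_rng(2)
        h_open = rng.normal(size=n) * np.exp(-np.arange(n) / 40.0)     # fake open-ear IR
        h_blocked = rng.normal(size=n) * np.exp(-np.arange(n) / 25.0)  # fake blocked IR

        H_open, H_blk = np.fft.rfft(h_open), np.fft.rfft(h_blocked)
        eps = 1e-3 * np.max(np.abs(H_blk))        # regularization against blow-up
        C = H_open * np.conj(H_blk) / (np.abs(H_blk) ** 2 + eps ** 2)
        comp_ir = np.fft.irfft(C, n)              # compensation impulse response
        print(comp_ir[:5])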

  • Haptically Assisting Breast Tumor Detection by Augmenting Abnormal Lump

    Seokhee JEON  

     
    LETTER-Human-computer Interaction
    Vol: E97-D No:2 · Page(s): 361-365

    This paper reports the use of haptic augmented reality in breast tumor palpation. In general, lumps in the breast are stiffer than the surrounding tissue, allowing us to detect them haptically through self-palpation. The goal of the study is to assist self-palpation by haptically augmenting the stiffness around lumps. The key steps are to estimate the non-linear stiffness of normal tissue in an offline preprocessing step, detect areas that show abnormally stiff responses, and amplify the difference in stiffness through a haptic augmented reality interface. The performance of the system was evaluated in a user study, demonstrating its potential.
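
    The amplification step can be illustrated with a toy linear force law that exaggerates the contrast between measured and normal stiffness (the gain and stiffness values are invented, and the paper's offline non-linear stiffness estimation is omitted):

        # Toy stiffness-contrast amplification for haptic rendering.
        def augmented_force(displacement, k_measured, k_normal, gain=3.0):
            # Exaggerate the deviation of measured stiffness from normal tissue.
            k_aug = k_normal + gain * (k_measured - k_normal)
            return max(k_aug, 0.0) * displacement   # rendered force in newtons

        print(augmented_force(0.005, k_measured=600.0, k_normal=400.0))  # stiff lump
        print(augmented_force(0.005, k_measured=400.0, k_normal=400.0))  # normal tissue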

  • Depth Perception Control during Car Vibration by Hidden Images on Monocular Head-Up Display

    Tsuyoshi TASAKI  Akihisa MORIYA  Aira HOTTA  Takashi SASAKI  Haruhiko OKUMURA  

     
    PAPER-Multimedia Pattern Processing
    Vol: E96-D No:12 · Page(s): 2850-2856

    A novel depth perception control method for a monocular head-up display (HUD) in a car has been developed, called the dynamic perspective method. The method changes the size and position of a HUD image, such as an arrow, to induce depth perception, and achieves a perceived depth of 120 m within an error of 30% in simulation. However, it is difficult to achieve accurate depth perception in the real world because of car vibration. To solve this problem, we focus on the observation that people mentally complete hidden images from previously observed continuous images. We therefore hide the HUD image when the car vibrates strongly. We aim to indicate the accurate depth position using see-through HUD images while letting users complete the hidden image positions from the continuous images shown before the vibration. We built a car that detects strong vibration with an acceleration sensor and is equipped with our monocular HUD. The new method indicated the depth position more accurately than the previous method, as confirmed by a t-test.
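
    A toy rendering of the two ideas, perspective-consistent arrow sizing and vibration-gated blanking, is sketched below; the focal length, base size, and acceleration threshold are all invented for illustration:

        # Perspective arrow sizing plus vibration-gated HUD blanking (toy values).
        FOCAL_PX = 1200.0        # assumed virtual-image focal length in pixels
        BASE_SIZE_M = 1.0        # arrow size if drawn at 1 m
        ACC_THRESHOLD = 2.0      # m/s^2 of jolt above which we blank (assumed)

        def arrow_screen_size(depth_m):
            # Perspective projection: on-screen size falls off as 1/depth.
            return FOCAL_PX * BASE_SIZE_M / depth_m

        def hud_visible(vertical_accel):
            # Hide the image during strong vibration; the viewer mentally
            # completes it from the preceding continuous frames.
            return abs(vertical_accel) < ACC_THRESHOLD

        for d, a in [(30.0, 0.3), (120.0, 0.3), (120.0, 3.5)]:
            print(f"depth {d:5.1f} m -> {arrow_screen_size(d):6.1f} px, "
                  f"shown={hud_visible(a)}")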
