
Author Search Results

[Author] Haruo TAKEMURA (4 hits)
  • Subjective Difficulty Estimation of Educational Comics Using Gaze Features

    Kenya SAKAMOTO  Shizuka SHIRAI  Noriko TAKEMURA  Jason ORLOSKY  Hiroyuki NAGATAKI  Mayumi UEDA  Yuki URANISHI  Haruo TAKEMURA  

     
    PAPER-Educational Technology

    Publicized: 2023/02/03
    Vol: E106-D No:5
    Page(s): 1038-1048

    This study explores eye-gaze features that can be used to estimate subjective difficulty while reading educational comics. Educational comics have grown rapidly as a promising way to teach difficult topics through illustrations and text. However, a comic page packs many kinds of information together, so automatically detecting learner states such as subjective difficulty is hard for approaches like system-log-based detection, which is common in the Learning Analytics field. To address this problem, this study focused on 28 eye-gaze features, including three newly proposed features called “Variance in Gaze Convergence,” “Movement between Panels,” and “Movement between Tiles,” to estimate two degrees of subjective difficulty. We then ran an experiment in a simulated environment using Virtual Reality (VR) to collect gaze information accurately. We extracted features at two unit levels, page and panel, and evaluated the classification accuracy for each in user-dependent and user-independent settings. Trained with a Support Vector Machine (SVM), our proposed features achieved average F1 scores of 0.721 and 0.742 in the user-dependent and user-independent models, respectively, at the panel level.
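As a rough illustration of the final classification step described above, the sketch below trains an SVM on synthetic stand-ins for per-panel gaze features and reports an F1 score. The feature values and labels are fabricated for the example; only the 28-feature dimensionality and the SVM/F1 setup come from the abstract, and scikit-learn is assumed as the SVM implementation.

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 28 per-panel gaze features
# (e.g. "Variance in Gaze Convergence", "Movement between Panels").
n_panels, n_features = 200, 28
X = rng.normal(size=(n_panels, n_features))
# Fabricated binary label: 0 = easy, 1 = difficult (two degrees of difficulty).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_panels) > 0).astype(int)

# Hold out panels, train an RBF-kernel SVM, and score with F1,
# mirroring the evaluation metric reported in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
f1 = f1_score(y_te, clf.predict(X_te))
print(f"F1 on held-out panels: {f1:.3f}")
```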

  • A High Presence Shared Space Communication System Using 2D Background and 3D Avatar

    Kyohei YOSHIKAWA  Takashi MACHIDA  Kiyoshi KIYOKAWA  Haruo TAKEMURA  

     
    INVITED PAPER

    Vol: E87-D No:12
    Page(s): 2532-2539

    Displaying a 3D geometric model of a user in real time is an advantage for a telecommunication system, because depth information is useful for nonverbal communication such as finger-pointing and gesturing, which carry 3D information. However, the range image acquired by a rangefinder suffers from errors due to image noise and distortions in depth measurement. A 2D image, on the other hand, is free from such errors. In this paper, we propose a new method for a shared space communication system that combines the advantages of both 2D and 3D representations. A user is represented as a 3D geometric model in order to exchange nonverbal communication cues. The background is displayed as a 2D image to give the user adequate information about the environment of the remote site. Additionally, a high-resolution texture taken by a video camera is projected onto the 3D geometric model of the user, because the low resolution of the image acquired by the rangefinder makes it difficult to exchange facial expressions. Furthermore, to fill in the data occluded by the user, old pixel values are used for the user area in the 2D background image. We have constructed a prototype of a high-presence shared space communication system based on our method. Through a number of experiments, we have found that our method is more effective for telecommunication than methods using only a 2D or a 3D representation.
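The texture-projection step described above can be sketched as an ordinary pinhole projection: each vertex of the user's 3D model is projected into the high-resolution video camera image to find the pixel whose color should be mapped onto it. This is a minimal sketch, not the authors' implementation; the intrinsic matrix values are assumptions made up for the example.

```python
import numpy as np

# Assumed camera intrinsics for the high-resolution video camera:
# focal length 800 px, principal point at (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(vertex_cam):
    """Project a 3D point in camera coordinates to pixel coordinates,
    i.e. the texture lookup position for projective texturing."""
    p = K @ vertex_cam
    return p[:2] / p[2]  # perspective divide

# A model vertex 2 m in front of the camera, slightly off-axis.
uv = project(np.array([0.1, -0.05, 2.0]))
print(uv)
```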

  • Real-Time Space Carving Using Graphics Hardware

    Christian NITSCHKE  Atsushi NAKAZAWA  Haruo TAKEMURA  

     
    PAPER

    Vol: E90-D No:8
    Page(s): 1175-1184

    Reconstruction of real-world scenes from a set of multiple images is a topic in computer vision and 3D computer graphics with many interesting applications. Attempts have been made at real-time reconstruction on PC cluster systems. While these provide sufficient performance, they are expensive and inflexible. Approaches that use GPU hardware acceleration on single workstations achieve real-time frame rates for novel-view synthesis, but do not provide an explicit volumetric representation. This work shows our efforts in developing a GPU hardware-accelerated framework that provides a photo-consistent reconstruction of a dynamic 3D scene. High performance is achieved by first applying a shape-from-silhouette technique. Since the entire processing is done on a single PC, the framework can be applied in mobile environments, enabling a wide range of further applications. We explain our approach using programmable vertex and fragment processors and compare it to highly optimized CPU implementations. We show that the new approach can outperform the latter by more than one order of magnitude, and give an outlook on interesting future enhancements.
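The shape-from-silhouette stage mentioned above can be sketched on the CPU (the paper runs it on the GPU): a voxel survives only if it projects inside the object silhouette in every camera view. This is a minimal NumPy sketch under assumed toy cameras and masks, not the paper's hardware-accelerated pipeline.

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Shape from silhouette: keep a voxel only if it projects inside
    every camera's binary silhouette mask.
    voxels: (N, 3) centers; cameras: list of 3x4 projection matrices;
    silhouettes: list of boolean H x W masks. Returns a boolean keep-mask."""
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, mask in zip(cameras, silhouettes):
        p = homog @ P.T                            # project to image plane
        u = (p[:, 0] / p[:, 2]).round().astype(int)
        v = (p[:, 1] / p[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]   # inside the silhouette?
        keep &= hit                                # carve away misses
    return keep

# Toy example: one camera with an identity-like projection and a 4x4
# silhouette containing only pixel (u=1, v=1).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True
voxels = np.array([[1.0, 1.0, 0.0],   # projects onto the silhouette
                   [2.0, 3.0, 0.0]])  # projects outside it
keep = carve(voxels, [P], [mask])
```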

  • Real-Time Tracking of Multiple Moving Object Contours in a Moving Camera Image Sequence

    Shoichi ARAKI  Takashi MATSUOKA  Naokazu YOKOYA  Haruo TAKEMURA  

     
    PAPER-Image Processing, Image Pattern Recognition

    Vol: E83-D No:7
    Page(s): 1583-1591

    This paper describes a new method for detecting and tracking moving objects in an image sequence from a moving camera using robust estimation and active contour models. We assume that the apparent background motion between two consecutive image frames can be approximated by an affine transformation. In order to register the static background, we estimate the affine transformation parameters using the LMedS (Least Median of Squares) method, a robust estimator. Split-and-merge contour models are employed for tracking multiple moving objects. The image energy of the contour models is defined on the difference image obtained by subtracting the previous frame, transformed with the estimated affine parameters, from the current frame. We have implemented the method on an image processing system consisting of DSP boards for real-time tracking of moving objects in a moving camera image sequence.
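The LMedS estimation step above can be sketched as follows: repeatedly fit an affine transform to random minimal 3-point samples of background correspondences and keep the model with the smallest median squared residual, so that outlier points on moving objects cannot drag the fit. This is a minimal sketch with synthetic correspondences, not the authors' DSP implementation; the trial count and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst (>= 3 points).
    Returns a 3x2 matrix M so that [x, y, 1] @ M = [x', y']."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def lmeds_affine(src, dst, n_trials=200):
    """LMedS: sample minimal 3-point sets and keep the affine model with
    the smallest median squared residual over all correspondences."""
    homog = np.hstack([src, np.ones((len(src), 1))])
    best_M, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(src), 3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        resid = np.sum((homog @ M - dst) ** 2, axis=1)
        med = np.median(resid)          # median, not mean: robust to outliers
        if med < best_med:
            best_med, best_M = med, M
    return best_M

# Synthetic data: 50 background points follow a known affine motion;
# the first 10 are outliers standing in for points on moving objects.
true_M = np.array([[1.0, 0.02], [-0.02, 1.0], [3.0, -1.5]])
src = rng.uniform(0, 100, size=(50, 2))
dst = np.hstack([src, np.ones((50, 1))]) @ true_M
dst[:10] += rng.uniform(20, 40, size=(10, 2))  # corrupt 20% of the points

M = lmeds_affine(src, dst)
```

A plain least-squares fit over all 50 points would be biased by the 10 outliers; the median criterion ignores them as long as inliers form a majority.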