
Author Search Result

[Author] Yasushi YAGI (9 hits)

  • Controlling the Display of Capsule Endoscopy Video for Diagnostic Assistance

    Hai VU  Tomio ECHIGO  Ryusuke SAGAWA  Keiko YAGI  Masatsugu SHIBA  Kazuhide HIGUCHI  Tetsuo ARAKAWA  Yasushi YAGI  

     
    PAPER-Biological Engineering

    Vol: E92-D No:3
    Page(s): 512-528

    Interpretation by physicians of capsule endoscopy image sequences captured over periods of 7-8 hours usually requires 45 to 120 minutes of extreme concentration. This paper describes a novel method to reduce diagnostic time by automatically controlling the display frame rate. Unlike existing techniques, this method displays the original images with no skipping of frames. The sequence can be played at a high frame rate in stable regions to save time; in regions with abrupt changes, the speed is decreased so that suspicious findings can be ascertained more conveniently. To realize such a system, cue information about the disparity of consecutive frames, including color similarity and motion displacements, is extracted. A decision tree uses these features to classify the states of the image acquisition. For each classified state, the delay time between frames is calculated by parametric functions. A scheme for selecting the optimal parameter set, determined from assessments by physicians, is deployed. Experiments involved clinical evaluations to investigate the effectiveness of this method compared to a standard view using an existing system. Results from a logged-action-based analysis show that, compared with the existing system, the proposed method reduced diagnostic time to around 32.5±7 minutes per full sequence while the number of abnormalities found was similar. Physicians also needed less effort because of the system's efficient operability. The results of the evaluations should convince physicians that they can safely use this method and obtain reduced diagnostic times.
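    As a rough illustration of this kind of adaptive playback control (not the authors' actual decision tree or physician-tuned parameter scheme), the sketch below derives a color-histogram similarity and a crude motion measure from consecutive frames and maps them to an inter-frame display delay; the states, thresholds, and delay function are invented placeholders.

```python
import numpy as np

def color_similarity(prev, curr, bins=16):
    """Histogram intersection of two RGB frames (pixel values in [0, 255])."""
    sim = 0.0
    for c in range(3):
        h1 = np.histogram(prev[..., c], bins=bins, range=(0, 255))[0].astype(float)
        h2 = np.histogram(curr[..., c], bins=bins, range=(0, 255))[0].astype(float)
        sim += np.minimum(h1 / h1.sum(), h2 / h2.sum()).sum()
    return sim / 3.0  # 1.0 means identical color distributions

def motion_magnitude(prev, curr):
    """Crude motion proxy: mean absolute intensity change between frames."""
    return float(np.mean(np.abs(curr.astype(float) - prev.astype(float))) / 255.0)

def display_delay(prev, curr, base_delay=0.02, max_delay=0.5):
    """Map inter-frame disparity to a playback delay in seconds.

    Stable regions (high color similarity, low motion) get the short base
    delay; rough regions get a long delay.  The states and thresholds here
    are placeholders, not the physician-assessed parameters of the paper.
    """
    sim = color_similarity(prev, curr)
    mot = motion_magnitude(prev, curr)
    if sim > 0.9 and mot < 0.05:   # "stable" state -> fast playback
        return base_delay
    if sim > 0.7:                  # "moderate change" state -> scale with motion
        return base_delay + (max_delay - base_delay) * min(mot, 1.0)
    return max_delay               # "rough change" state -> slow playback

# Two random frames stand in for consecutive capsule endoscopy images.
rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, (256, 256, 3))
f1 = np.clip(f0 + rng.integers(-10, 10, f0.shape), 0, 255)
print(display_delay(f0, f1))
```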

  • Health Indicator Estimation by Video-Based Gait Analysis

    Ruochen LIAO  Kousuke MORIWAKI  Yasushi MAKIHARA  Daigo MURAMATSU  Noriko TAKEMURA  Yasushi YAGI  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2021/07/09
    Vol: E104-D No:10
    Page(s): 1678-1690

    In this study, we propose a method to estimate body composition-related health indicators (e.g., the ratios of body fat, body water, and muscle) using video-based gait analysis. This method is more efficient than individual measurement using a conventional body composition meter. Specifically, we designed a deep-learning framework with a convolutional neural network (CNN), where the input is a gait energy image (GEI) and the output consists of the health indicators. Although a vast amount of training data is typically required to train the network parameters, it is infeasible to collect sufficient ground-truth data, i.e., pairs consisting of a gait video and the health indicators measured with a body composition meter for each subject. We therefore use a two-step approach to exploit an auxiliary gait dataset that contains a large number of subjects but lacks the ground-truth health indicators. In the first step, we pre-train a backbone network using the auxiliary dataset to output gait primitives such as arm swing, stride, degree of stoop, and body width, which are considered relevant to the health indicators. In the second step, we add layers to the backbone network and fine-tune the entire network to output the health indicators even with a limited number of ground-truth data points. Experimental results show that the proposed method outperforms both training from scratch and an auto-encoder-based pre-training and fine-tuning approach; it achieves relatively high estimation accuracy for the body composition-related health indicators, except for the body fat-relevant ones.
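    A minimal PyTorch sketch of this two-step scheme follows; the layer sizes, the four gait primitives, the three health indicators, and the GEI resolution are all assumptions made for illustration, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Small CNN over a single-channel GEI, producing a feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

gei = torch.randn(8, 1, 128, 88)   # dummy batch of GEIs (height 128, width 88)

# Step 1: pre-train backbone + primitive head on the large auxiliary dataset,
# regressing gait primitives (here 4: arm swing, stride, stoop, body width).
backbone = Backbone()
primitive_head = nn.Linear(128, 4)
pre_loss = nn.functional.mse_loss(primitive_head(backbone(gei)), torch.randn(8, 4))

# Step 2: keep the pre-trained backbone, add a head for the health indicators
# (here 3), and fine-tune the whole network on the small labeled dataset.
indicator_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
model = nn.Sequential(backbone, indicator_head)
loss = nn.functional.mse_loss(model(gei), torch.randn(8, 3))
loss.backward()   # an optimizer step over model.parameters() would follow
print(pre_loss.item(), loss.item())
```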

  • Pedestrian Detection by Using a Spatio-Temporal Histogram of Oriented Gradients

    Chunsheng HUA  Yasushi MAKIHARA  Yasushi YAGI  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E96-D No:6
    Page(s): 1376-1386

    In this paper, we propose a pedestrian detection algorithm based on both appearance and motion features to achieve high detection accuracy in complex scenes. A pedestrian's appearance is described by a histogram of oriented spatial gradients, and his/her motion is represented by another histogram of temporal gradients computed from successive frames. Since pedestrians typically exhibit not only their human shapes but also unique human movements generated by their arms and legs, the proposed algorithm is particularly powerful in discriminating a pedestrian in a cluttered scene, where some background regions may appear to have human shapes but their motion differs from human movement. Unlike algorithms based on a co-occurrence feature descriptor, where significant generalization errors may arise owing to the lack of extensive training samples covering the feature variations, the proposed algorithm describes shape and motion as separate, unique features. These features enable us to train a pedestrian detector in the form of a spatio-temporal histogram of oriented gradients using the AdaBoost algorithm with a relatively small training dataset, while still achieving excellent detection performance. We confirmed the effectiveness of the proposed algorithm through experiments on several public datasets.
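    A toy sketch of combining a spatial gradient-orientation histogram with a temporal-gradient histogram and training a boosted classifier is shown below; the single-window histograms (no cell/block layout), bin counts, and window size are simplifications, not the descriptor actually used in the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def spatio_temporal_hog(prev, curr, bins=9):
    """Very simplified ST-HOG over a whole window (no cells or blocks)."""
    gy, gx = np.gradient(curr.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi                    # unsigned orientation
    spatial, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)

    gt = curr.astype(float) - prev.astype(float)        # temporal gradient
    temporal, _ = np.histogram(gt, bins=bins, range=(-255, 255), weights=np.abs(gt))

    feat = np.concatenate([spatial, temporal])
    return feat / (np.linalg.norm(feat) + 1e-8)

# Random 64x32 windows stand in for pedestrian / background training samples.
rng = np.random.default_rng(1)
X = np.stack([spatio_temporal_hog(rng.random((64, 32)) * 255,
                                  rng.random((64, 32)) * 255)
              for _ in range(40)])
y = rng.integers(0, 2, 40)                              # dummy labels
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(clf.predict(X[:5]))
```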

  • Omnidirectional Sensing and Its Applications

    Yasushi YAGI  

     
    INVITED SURVEY PAPER

    Vol: E82-D No:3
    Page(s): 568-579

    The goal of this paper is to present a critical survey of the existing literature on omnidirectional sensing. The range of vision applications such as autonomous robot navigation, telepresence, and virtual reality is expanding through the use of cameras with a wide angle of view. In particular, a real-time omnidirectional camera with a single center of projection is suitable for analysis and monitoring, because we can easily generate any desired image projected onto any designated image plane, such as a pure perspective image or a panoramic image, from the omnidirectional input image. In this paper, I review the designs and principles of existing omnidirectional cameras, which can acquire an omnidirectional (360-degree) field of view, and their applications in the fields of autonomous robot navigation, telepresence, remote surveillance, and virtual reality.
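    To illustrate the kind of reprojection a single-viewpoint omnidirectional image allows, the sketch below unwarps an annular omnidirectional image into a panorama by polar resampling; the plain linear radius mapping and nearest-neighbour lookup are simplifying assumptions, not the projection model of any specific mirror discussed in the survey.

```python
import numpy as np

def unwarp_to_panorama(omni, center, r_min, r_max, out_h=64, out_w=360):
    """Resample an omnidirectional image into a panorama by polar lookup.

    omni         : (H, W) grayscale omnidirectional image
    center       : (cx, cy) image center of the mirror
    r_min, r_max : radii bounding the annular mirror region
    A real system would model the mirror's projection geometry instead of
    this plain linear radius mapping.
    """
    cx, cy = center
    pano = np.zeros((out_h, out_w), dtype=omni.dtype)
    for row in range(out_h):
        r = r_max - (r_max - r_min) * row / (out_h - 1)
        for col in range(out_w):
            theta = 2.0 * np.pi * col / out_w
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= y < omni.shape[0] and 0 <= x < omni.shape[1]:
                pano[row, col] = omni[y, x]
    return pano

omni = np.random.default_rng(2).random((480, 480))
print(unwarp_to_panorama(omni, center=(240, 240), r_min=60, r_max=230).shape)
```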

  • Gait Phase Partitioning and Footprint Detection Using Mutually Constrained Piecewise Linear Approximation with Dynamic Programming

    Makoto YASUKAWA  Yasushi MAKIHARA  Toshinori HOSOI  Masahiro KUBO  Yasushi YAGI  

     
    PAPER-Rehabilitation Engineering and Assistive Technology

    Publicized: 2021/08/02
    Vol: E104-D No:11
    Page(s): 1951-1962

    Human gait analysis has been widely used in the medical and health fields, where it is essential to extract spatio-temporal gait features (e.g., single support duration, step length, and toe angle) by partitioning the gait phase and estimating the footprint position/orientation. We therefore propose a method to partition the gait phase from a given foot position sequence using mutually constrained piecewise linear approximation with dynamic programming, which represents not only normal gait but also pathological gait well without training data. We also propose a method to detect footprints by accumulating toe edges on the floor plane during stance phases, which enables us to detect footprints more clearly than a conventional method. Finally, we extract four spatial/temporal gait parameters for accuracy evaluation: single support duration, double support duration, toe angle, and step length. We conducted experiments to validate the proposed method using two types of gait patterns, namely healthy and mimicked hemiplegic gait, from 10 subjects, and confirmed that the proposed method estimates the spatial/temporal gait parameters more accurately than a conventional skeleton-based method regardless of the gait pattern.
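    The sketch below shows a generic dynamic-programming segmentation of a 1D foot-position sequence into piecewise linear pieces (a fit-error term plus a per-segment penalty); it is the classic optimal-segmentation recurrence, not the paper's mutually constrained formulation, and the penalty, minimum segment length, and toy signal are assumptions.

```python
import numpy as np

def linear_fit_error(t, y):
    """Sum of squared residuals of a least-squares line fit to (t, y)."""
    A = np.column_stack([t, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((A @ coef - y) ** 2))

def segment_piecewise_linear(y, penalty=1.0, min_len=3):
    """Return segment boundaries minimizing fit error plus a per-segment penalty."""
    n = len(y)
    t = np.arange(n, dtype=float)
    cost = np.full(n + 1, np.inf)
    cost[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(min_len, n + 1):
        for i in range(0, j - min_len + 1):
            c = cost[i] + linear_fit_error(t[i:j], y[i:j]) + penalty
            if c < cost[j]:
                cost[j], back[j] = c, i
    bounds, j = [], n        # recover boundaries by backtracking
    while j > 0:
        bounds.append(j)
        j = back[j]
    return sorted(bounds)

# Toe x-position that is flat (stance), then moves (swing), then flat again.
y = np.concatenate([np.zeros(20), np.linspace(0, 30, 15), np.full(20, 30.0)])
noise = np.random.default_rng(3).normal(0, 0.2, len(y))
print(segment_piecewise_linear(y + noise, penalty=5.0))
```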

  • Individuality-Preserving Silhouette Extraction for Gait Recognition and Its Speedup

    Masakazu IWAMURA  Shunsuke MORI  Koichiro NAKAMURA  Takuya TANOUE  Yuzuko UTSUMI  Yasushi MAKIHARA  Daigo MURAMATSU  Koichi KISE  Yasushi YAGI  

     
    PAPER-Pattern Recognition

    Publicized: 2021/03/24
    Vol: E104-D No:7
    Page(s): 992-1001

    Most gait recognition approaches rely on silhouette-based representations because of their high recognition accuracy and computational efficiency. A fundamental problem for these approaches is how to accurately extract individuality-preserving silhouettes from real scenes, where foreground colors may be similar to background colors and the background is cluttered. We therefore propose a method of individuality-preserving silhouette extraction for gait recognition that uses standard gait models (SGMs), composed of clean silhouette sequences of various training subjects, as shape priors. The SGMs are smoothly introduced into a well-established graph-cut segmentation framework. Experiments showed that the proposed method improved silhouette extraction accuracy by more than 2.3% over representative methods and improved the rank-20 identification rate of gait recognition by more than 11.0%. In addition, to reduce the computational cost, we introduced an approximation into the dynamic programming calculation; as a result, we reduced the computational cost by 85.0% without reducing the segmentation accuracy.
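    As a rough illustration of how a shape prior can be folded into a segmentation energy, the sketch below builds per-pixel foreground/background unary costs from a color likelihood plus a silhouette-shaped prior probability (standing in for an SGM) and picks the cheaper label per pixel; a real graph cut would add pairwise smoothness terms and solve a min cut with a max-flow solver, which is omitted here, and all numbers are placeholders.

```python
import numpy as np

def unary_costs(image, fg_mean, bg_mean, shape_prior, prior_weight=2.0):
    """Per-pixel foreground/background costs from color likelihood + shape prior.

    image            : (H, W) grayscale frame
    fg_mean, bg_mean : rough foreground/background intensities (illustrative)
    shape_prior      : (H, W) probability of foreground from a clean silhouette model
    """
    eps = 1e-6
    color_fg = (image - fg_mean) ** 2
    color_bg = (image - bg_mean) ** 2
    prior_fg = -prior_weight * np.log(shape_prior + eps)
    prior_bg = -prior_weight * np.log(1.0 - shape_prior + eps)
    return color_fg + prior_fg, color_bg + prior_bg

# Naive labeling: pick the cheaper label per pixel (a graph cut would instead
# minimize unary + pairwise smoothness terms over the whole image).
rng = np.random.default_rng(4)
img = rng.normal(0.3, 0.05, (64, 44))
img[16:48, 14:30] = rng.normal(0.7, 0.05, (32, 16))    # bright "person" region
prior = np.zeros((64, 44)); prior[14:50, 12:32] = 0.9  # silhouette-shaped prior
cost_fg, cost_bg = unary_costs(img, fg_mean=0.7, bg_mean=0.3, shape_prior=prior)
silhouette = (cost_fg < cost_bg).astype(np.uint8)
print(silhouette.sum(), "foreground pixels")
```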

  • Orientation-Compensative Signal Registration for Owner Authentication Using an Accelerometer

    Trung Thanh NGO  Yasushi MAKIHARA  Hajime NAGAHARA  Yasuhiro MUKAIGAWA  Yasushi YAGI  

     
    PAPER-Pattern Recognition

    Vol: E97-D No:3
    Page(s): 541-553

    Gait-based owner authentication using accelerometers has recently been studied extensively owing to the development of wearable electronic devices. An actual gait signal is always subject to change due to many factors, including variation in sensor attachment. In this research, we tackle the practical problem of sensor-orientation inconsistency, in which signal sequences are captured at different sensor orientations. We present an iterative signal matching algorithm based on a phase-registration technique that simultaneously estimates the relative sensor orientation and registers the 3D acceleration signals. The iterative framework is initialized using 1D orientation-invariant resultant signals computed from the 3D signals, so the matching algorithm is robust to any initial sensor orientation. This matching algorithm is used to match probe and gallery signals in the proposed owner authentication method. Experiments using actual gait signals under various conditions, such as different days, sensors, weights being carried, and sensor orientations, show that our authentication method achieves positive results.
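    The general idea can be sketched as follows: the orientation-invariant resultant magnitude gives a coarse temporal alignment by cross-correlation, and the relative sensor rotation between the aligned 3D signals is then estimated with an SVD-based (Kabsch) fit. This is not the paper's iterative phase-registration formulation; the synthetic signal below is only a sanity check.

```python
import numpy as np

def coarse_phase_shift(sig_a, sig_b):
    """Shift (in samples) aligning the 1D resultant magnitudes of two 3D signals."""
    ra = np.linalg.norm(sig_a, axis=1) - np.linalg.norm(sig_a, axis=1).mean()
    rb = np.linalg.norm(sig_b, axis=1) - np.linalg.norm(sig_b, axis=1).mean()
    corr = np.correlate(ra, rb, mode="full")
    return int(np.argmax(corr)) - (len(rb) - 1)

def relative_rotation(sig_a, sig_b):
    """Least-squares rotation R with sig_a ≈ sig_b @ R.T (Kabsch via SVD)."""
    H = sig_b.T @ sig_a
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R

# Synthetic check: the probe is a rotated copy of the gallery signal.
rng = np.random.default_rng(5)
gallery = rng.normal(size=(200, 3))
angle = np.deg2rad(40)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
probe = gallery @ R_true.T
shift = coarse_phase_shift(probe, gallery)   # 0 for this synthetic pair
R_est = relative_rotation(probe, gallery)
print(shift, np.allclose(R_est, R_true, atol=1e-6))
```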

  • Real-Time Estimation of Fast Egomotion with Feature Classification Using Compound Omnidirectional Vision Sensor

    Trung Thanh NGO  Yuichiro KOJIMA  Hajime NAGAHARA  Ryusuke SAGAWA  Yasuhiro MUKAIGAWA  Masahiko YACHIDA  Yasushi YAGI  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E93-D No:1
    Page(s): 152-166

    For fast egomotion of a camera, computing feature correspondences and motion parameters by global search becomes highly time-consuming. Therefore, the complexity of the estimation needs to be reduced for real-time applications. In this paper, we propose a compound omnidirectional vision sensor and an algorithm for estimating its fast egomotion. The proposed sensor has both multiple baselines and a large field of view (FOV). Our method uses the multi-baseline stereo vision capability to classify feature points as near or far features. After the classification, we can estimate the camera rotation and translation separately by using random sample consensus (RANSAC) to reduce the computational complexity. The large FOV also improves robustness, since translation and rotation are clearly distinguished. To date, there has been no work on combining multi-baseline stereo with large-FOV characteristics for egomotion estimation, even though each of these characteristics is individually important for improving it. Experiments showed that the proposed method is robust and produces reasonable accuracy in real time for fast motion of the sensor.
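    A heavily simplified 2D sketch of the decoupling idea appears below: far features, whose bearings are essentially unaffected by translation, give the rotation via a 1-point RANSAC vote, and near features then give the translation once the rotation is removed. The planar geometry, synthetic data, thresholds, and classification into near/far are all assumptions for illustration, not the sensor model or algorithm of the paper.

```python
import numpy as np

def ransac_rotation_from_far(az0, az1, iters=100, thresh=np.deg2rad(1.0), rng=None):
    """RANSAC estimate of yaw rotation from azimuths of far features."""
    if rng is None:
        rng = np.random.default_rng()
    diffs = (az1 - az0 + np.pi) % (2 * np.pi) - np.pi
    best_rot, best_inliers = 0.0, -1
    for _ in range(iters):
        cand = diffs[rng.integers(len(diffs))]          # 1-point hypothesis
        inliers = np.abs(diffs - cand) < thresh
        if inliers.sum() > best_inliers:
            best_rot, best_inliers = float(diffs[inliers].mean()), inliers.sum()
    return best_rot

def translation_from_near(pts0, pts1, yaw):
    """Least-squares 2D translation of near features after removing the rotation."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return np.mean(pts1 - pts0 @ R.T, axis=0)

# Synthetic data: true motion is a 5 degree yaw plus a (0.2, -0.1) translation.
rng = np.random.default_rng(6)
yaw_true, t_true = np.deg2rad(5.0), np.array([0.2, -0.1])
az0 = rng.uniform(-np.pi, np.pi, 50)                    # far-feature azimuths
az1 = az0 + yaw_true + rng.normal(0, 0.002, 50)
az1[:5] += 0.3                                          # a few outliers
near0 = rng.uniform(-2, 2, (30, 2))                     # near features (meters)
c, s = np.cos(yaw_true), np.sin(yaw_true)
near1 = near0 @ np.array([[c, -s], [s, c]]).T + t_true
yaw_est = ransac_rotation_from_far(az0, az1, rng=rng)
print(np.rad2deg(yaw_est), translation_from_near(near0, near1, yaw_est))
```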

  • Guidance of a Mobile Robot with Environmental Map Using Omnidirectional Image Sensor COPIS

    Yasushi YAGI  Yoshimitsu NISHIZAWA  Masahiko YACHIDA  

     
    PAPER

    Vol: E76-D No:4
    Page(s): 486-493

    We have proposed a new omnidirectional image sensor, COPIS (COnic Projection Image Sensor), for guiding the navigation of a mobile robot. Its key feature is passive sensing of an omnidirectional image of the environment in real time (at the frame rate of a TV camera) using a conic mirror. COPIS is a suitable sensor for visual navigation in real-world environments with moving objects. This paper describes a method for estimating the location and motion of the robot by detecting the azimuth of each object in the omnidirectional image and matching the azimuths against a given environmental map. The robot can always estimate its own location and motion precisely, because COPIS observes a 360-degree view around the robot, even if not all edges are extracted correctly from the omnidirectional image. We also present a method to avoid collisions with unknown obstacles and to estimate their locations by detecting changes in their azimuths while the robot is moving through the environment. Using the COPIS system, we performed several experiments in the real world.
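    As a toy illustration of azimuth-based localization against a map (not the estimation scheme used with COPIS), the sketch below recovers a planar robot pose (x, y, heading) from the azimuths of mapped landmarks by a coarse grid search over the squared azimuth residuals; the map, grid, and noiseless observations are invented.

```python
import numpy as np

def azimuth_residuals(pose, landmarks, observed_az):
    """Angular differences between predicted and observed landmark azimuths."""
    x, y, heading = pose
    pred = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x) - heading
    d = pred - observed_az
    return (d + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)

def estimate_pose(landmarks, observed_az, xs, ys, headings):
    """Coarse grid search for the pose minimizing squared azimuth residuals."""
    best, best_cost = None, np.inf
    for x in xs:
        for y in ys:
            for h in headings:
                cost = np.sum(azimuth_residuals((x, y, h), landmarks, observed_az) ** 2)
                if cost < best_cost:
                    best, best_cost = (x, y, h), cost
    return best

# Map of landmark positions (e.g., vertical edges of walls and doors) in meters.
landmarks = np.array([[0.0, 5.0], [5.0, 5.0], [5.0, 0.0], [-3.0, 2.0]])
true_pose = (1.0, 2.0, np.deg2rad(30))
obs = azimuth_residuals(true_pose, landmarks, np.zeros(len(landmarks)))  # simulated azimuths

grid = np.linspace(-1, 6, 36)
print(estimate_pose(landmarks, obs, grid, grid, np.deg2rad(np.arange(0, 360, 5))))
```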