
Author Search Result

[Author] Yasutomo KAWANISHI (6 hits)

  • Multiple Human Tracking Using an Omnidirectional Camera with Local Rectification and World Coordinates Representation

    Hitoshi NISHIMURA  Naoya MAKIBUCHI  Kazuyuki TASAKA  Yasutomo KAWANISHI  Hiroshi MURASE  

     
    PAPER

      Publicized:
    2020/04/10
      Vol:
    E103-D No:6
      Page(s):
    1265-1275

    Multiple human tracking is widely used in various fields such as marketing and surveillance. The typical approach associates human detection results between consecutive frames using the features and bounding boxes (position+size) of detected humans. Some methods use an omnidirectional camera to cover a wider area, but ID switches often occur when associating detections, due to the following two factors: i) the feature is adversely affected because the bounding box includes many background regions when a human is captured from an oblique angle; ii) the position and size change dramatically between consecutive frames because the distance metric is non-uniform in an omnidirectional image. In this paper, we propose a novel method that accurately tracks humans with an association metric designed for omnidirectional images. The proposed method has two key points: i) for feature extraction, we introduce local rectification, which reduces the effect of background regions in the bounding box; ii) for distance calculation, we describe the positions in a world coordinate system where the distance metric is uniform. In the experiments, we confirmed that the Multiple Object Tracking Accuracy (MOTA) improved by 3.3 points on the LargeRoom dataset and by 2.3 points on the SmallRoom dataset.
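
    As a rough, hedged illustration of the second key point above (computing association distances in a world coordinate system rather than on the omnidirectional image plane), the Python sketch below maps a detection's pixel position to ground-plane coordinates under an assumed overhead equidistant fisheye model. The projection model, the camera parameters (cx, cy, f_px, cam_height), and the helper names are illustrative assumptions, not details taken from the paper.

        import numpy as np

        def image_to_ground(u, v, cx, cy, f_px, cam_height, max_theta=np.deg2rad(85)):
            """Project an image point from an overhead equidistant fisheye camera
            onto the ground plane (world coordinates in meters). Assumed model only."""
            dx, dy = u - cx, v - cy
            r = np.hypot(dx, dy)                   # radial distance in pixels
            theta = min(r / f_px, max_theta)       # angle from the optical axis
            ground_r = cam_height * np.tan(theta)  # radial distance on the floor
            phi = np.arctan2(dy, dx)               # azimuth is preserved by the projection
            return np.array([ground_r * np.cos(phi), ground_r * np.sin(phi)])

        def association_cost(track_pos_world, det_uv, cx, cy, f_px, cam_height):
            """Euclidean distance in world coordinates, where the metric is uniform."""
            det_world = image_to_ground(det_uv[0], det_uv[1], cx, cy, f_px, cam_height)
            return np.linalg.norm(track_pos_world - det_world)

        # Toy usage: image center, focal length (pixels), and camera height are made up.
        cx, cy, f_px, h = 640.0, 480.0, 300.0, 3.0
        cost = association_cost(np.array([1.0, 0.5]), (900.0, 500.0), cx, cy, f_px, h)
        print(round(cost, 2))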

  • Attribute-Aware Loss Function for Accurate Semantic Segmentation Considering the Pedestrian Orientations Open Access

    Mahmud Dwi SULISTIYO  Yasutomo KAWANISHI  Daisuke DEGUCHI  Ichiro IDE  Takatsugu HIRAYAMA  Jiang-Yu ZHENG  Hiroshi MURASE  

     
    PAPER

      Vol:
    E103-A No:1
      Page(s):
    231-242

    Numerous applications such as autonomous driving, satellite imagery sensing, and biomedical imaging use computer vision as an important tool for perception tasks. For Intelligent Transportation Systems (ITS), it is necessary to precisely recognize and locate scene elements in sensor data. Semantic segmentation is one of the computer vision methods intended to perform such tasks. However, existing semantic segmentation tasks label each pixel with a single object class. Recognizing object attributes, e.g., pedestrian orientation, would be more informative and help achieve a better scene understanding. Thus, we propose a method that performs semantic segmentation and pedestrian attribute recognition simultaneously. We introduce an attribute-aware loss function that can be applied to an arbitrary base model. Furthermore, a re-annotation of the existing Cityscapes dataset enriches the ground-truth labels by annotating the pedestrian orientation attributes. We implement the proposed method and compare the experimental results with those of other methods. The attribute-aware semantic segmentation outperforms baseline methods both in the traditional object segmentation task and in the expanded attribute detection task.
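
    The abstract describes an attribute-aware loss applicable to an arbitrary base model; the PyTorch-style sketch below is a minimal, assumed formulation that adds an orientation cross-entropy term restricted to pedestrian pixels on top of the usual per-pixel segmentation loss. The pedestrian class index, the 0.5 weight, and the function name are placeholders, not the paper's exact definition.

        import torch
        import torch.nn.functional as F

        def attribute_aware_loss(seg_logits, attr_logits, seg_labels, attr_labels,
                                 pedestrian_class=11, ignore_index=255, attr_weight=0.5):
            """Per-pixel segmentation cross-entropy plus an orientation (attribute)
            cross-entropy evaluated only on ground-truth pedestrian pixels.

            Shapes: seg_logits (B, C_seg, H, W), attr_logits (B, C_attr, H, W),
                    seg_labels and attr_labels (B, H, W) with integer class indices.
            """
            seg_loss = F.cross_entropy(seg_logits, seg_labels, ignore_index=ignore_index)

            # Mask out every pixel that is not a pedestrian in the ground truth,
            # so the attribute term only supervises pedestrian regions.
            attr_targets = attr_labels.clone()
            attr_targets[seg_labels != pedestrian_class] = ignore_index
            if (attr_targets != ignore_index).any():
                attr_loss = F.cross_entropy(attr_logits, attr_targets,
                                            ignore_index=ignore_index)
            else:  # no pedestrian pixels in this batch
                attr_loss = seg_logits.new_zeros(())

            return seg_loss + attr_weight * attr_loss

        # Toy usage with random tensors (batch 2, 19 segmentation classes, 4 orientations).
        seg = torch.randn(2, 19, 8, 8)
        attr = torch.randn(2, 4, 8, 8)
        seg_gt = torch.randint(0, 19, (2, 8, 8))
        attr_gt = torch.randint(0, 4, (2, 8, 8))
        print(attribute_aware_loss(seg, attr, seg_gt, attr_gt))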

  • Estimation of the Attractiveness of Food Photography Based on Image Features

    Kazuma TAKAHASHI  Tatsumi HATTORI  Keisuke DOMAN  Yasutomo KAWANISHI  Takatsugu HIRAYAMA  Ichiro IDE  Daisuke DEGUCHI  Hiroshi MURASE  

     
    LETTER-Human-computer Interaction

      Publicized:
    2019/05/07
      Vol:
    E102-D No:8
      Page(s):
    1590-1593

    We introduce a method to estimate the attractiveness of a food photo. It extracts image features focusing on the appearances of 1) the entire food and 2) the main ingredients. To estimate the attractiveness of an arbitrary food photo, these features are integrated in a regression scheme. We also constructed and released a food image dataset composed of images of ten food categories taken from 36 angles and accompanied by attractiveness values. Evaluation results showed the effectiveness of integrating the two kinds of image features.
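
    As a hedged sketch of the feature-integration-plus-regression scheme described above, the code below combines simple color-histogram features of the whole dish and of a main-ingredient region and fits a support vector regressor on synthetic stand-in data. The histogram features, the SVR, and all variable names are assumptions for illustration only, not the paper's actual features.

        import numpy as np
        from sklearn.svm import SVR

        def color_histogram(pixels, bins=8):
            """Flattened per-channel RGB histogram, normalized to sum to 1."""
            hist = [np.histogram(pixels[..., c], bins=bins, range=(0, 256))[0]
                    for c in range(3)]
            hist = np.concatenate(hist).astype(float)
            return hist / (hist.sum() + 1e-9)

        def extract_features(image_rgb, ingredient_mask):
            """Concatenate features of the whole dish and of the main-ingredient region."""
            whole = color_histogram(image_rgb)
            main = (color_histogram(image_rgb[ingredient_mask])
                    if ingredient_mask.any() else np.zeros_like(whole))
            return np.concatenate([whole, main])

        # Synthetic stand-ins for the dataset's images, ingredient masks, and scores.
        rng = np.random.default_rng(0)
        images = rng.integers(0, 256, size=(20, 64, 64, 3))
        masks = rng.random((20, 64, 64)) > 0.7
        scores = rng.random(20)

        X = np.stack([extract_features(im, m) for im, m in zip(images, masks)])
        model = SVR(kernel="rbf").fit(X, scores)   # regression over integrated features
        print(model.predict(X[:2]))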

  • Human Wearable Attribute Recognition Using Probability-Map-Based Decomposition of Thermal Infrared Images

    Brahmastro KRESNARAMAN  Yasutomo KAWANISHI  Daisuke DEGUCHI  Tomokazu TAKAHASHI  Yoshito MEKADA  Ichiro IDE  Hiroshi MURASE  

     
    PAPER-Image

      Vol:
    E100-A No:3
      Page(s):
    854-864

    This paper addresses the attribute recognition problem, a field of research that is dominated by studies in the visible spectrum. Only a few works are available in the thermal spectrum, which is fundamentally different from the visible one. This research performs recognition specifically of wearable attributes, such as glasses and masks. These attributes are usually small relative to the human body, which itself shows large intra-class variation; therefore, recognizing them is not an easy task. Our method utilizes a decomposition framework based on Robust Principal Component Analysis (RPCA) to extract the attribute information for recognition. However, because it is difficult to separate the body and the attributes without any prior knowledge, noise is also extracted along with the attributes, hampering the recognition capability. We therefore make use of prior knowledge, namely, the location where each attribute is likely to be present. This knowledge, referred to as the Probability Map, is incorporated as a weight in the RPCA decomposition. Using the Probability Map, we achieve an attribute-wise decomposition. The results show a significant improvement of this approach over the baseline, and the proposed method achieved the highest average performance with an F-score of 0.83.
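
    The sketch below is one plausible, assumed reading of the decomposition idea: Robust PCA solved with the standard inexact augmented Lagrangian updates, where the sparsity penalty is scaled element-wise by a Probability-Map-derived weight so that likely attribute pixels are absorbed into the sparse component more easily. The exact weighting and solver used in the paper are not reproduced here.

        import numpy as np

        def shrink(X, tau):
            """Element-wise soft thresholding (tau may be a scalar or an array)."""
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def svt(X, tau):
            """Singular value thresholding: shrink the singular values of X."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return (U * shrink(s, tau)) @ Vt

        def weighted_rpca(D, prob_map, n_iter=200, tol=1e-7):
            """Decompose D into a low-rank part L (body) and a sparse part S
            (wearable attributes), with the sparsity penalty reduced where the
            Probability Map expects an attribute, so those pixels enter S more easily.

            D, prob_map: (pixels x images) matrices; prob_map values lie in [0, 1].
            """
            m, n = D.shape
            lam = 1.0 / np.sqrt(max(m, n))                  # standard RPCA weight
            mu = 0.25 * m * n / (np.abs(D).sum() + 1e-9)    # common step-size heuristic
            W = lam * (1.0 - 0.9 * prob_map)                # element-wise penalty
            L, S, Y = np.zeros_like(D), np.zeros_like(D), np.zeros_like(D)
            for _ in range(n_iter):
                L = svt(D - S + Y / mu, 1.0 / mu)           # low-rank update
                S = shrink(D - L + Y / mu, W / mu)          # weighted sparse update
                R = D - L - S                               # residual
                Y = Y + mu * R                              # dual update
                if np.linalg.norm(R) <= tol * np.linalg.norm(D):
                    break
            return L, S

        # Toy example: a rank-1 "body" plus a localized "attribute" blob.
        rng = np.random.default_rng(1)
        body = np.outer(rng.random(100), rng.random(30))
        attr = np.zeros((100, 30)); attr[10:20, :5] = 1.0
        pmap = np.zeros((100, 30)); pmap[5:25, :8] = 1.0
        L, S = weighted_rpca(body + attr, pmap)
        print(np.linalg.matrix_rank(L, tol=1e-3), np.count_nonzero(np.abs(S) > 0.1))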

  • SDOF-Tracker: Fast and Accurate Multiple Human Tracking by Skipped-Detection and Optical-Flow

    Hitoshi NISHIMURA  Satoshi KOMORITA  Yasutomo KAWANISHI  Hiroshi MURASE  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2022/08/01
      Vol:
    E105-D No:11
      Page(s):
    1938-1946

    Multiple human tracking is a fundamental problem in understanding the context of a visual scene. Although both accuracy and speed are required in real-world applications, recent tracking methods based on deep learning focus on accuracy and require a substantial amount of running time. We aim to improve the tracking speed by performing human detection only at certain frame intervals, because detection accounts for most of the running time. The question is how to maintain accuracy while skipping human detection. In this paper, we propose a method that interpolates the detection results using optical flow, based on the fact that a person's appearance does not change much between adjacent frames. To maintain the tracking accuracy, we introduce robust interest point detection within the human regions and a tracking termination metric defined by the distribution of the interest points. On the MOT17 and MOT20 datasets of the MOTChallenge, the proposed SDOF-Tracker achieved the best performance in terms of total running time while maintaining the MOTA metric. Our code is available at https://github.com/hitottiez/sdof-tracker.
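
    The authors' released code at the URL above is the reference implementation; the OpenCV sketch below only illustrates the core skipped-detection idea under stated assumptions: run the detector every few frames and, on skipped frames, move each box by the median optical flow of interest points found inside it. The interval, thresholds, and the detect() callback are hypothetical placeholders.

        import cv2
        import numpy as np

        DETECT_INTERVAL = 5   # run the (expensive) human detector every 5th frame

        def propagate_boxes(prev_gray, gray, boxes):
            """Shift each box by the median Lucas-Kanade flow of interest points
            detected inside it; drop a box if its points cannot be tracked."""
            moved = []
            for (x, y, w, h) in boxes:
                roi = prev_gray[y:y + h, x:x + w]
                pts = (cv2.goodFeaturesToTrack(roi, maxCorners=30, qualityLevel=0.01,
                                               minDistance=3)
                       if roi.size else None)
                if pts is None:
                    moved.append((x, y, w, h))          # nothing to track, keep as-is
                    continue
                pts = pts.reshape(-1, 2).astype(np.float32) + np.float32([x, y])
                nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
                ok = status.reshape(-1) == 1
                if not ok.any():                        # a simple termination cue
                    continue
                dx, dy = np.median(nxt[ok] - pts[ok], axis=0)
                moved.append((int(x + dx), int(y + dy), w, h))
            return moved

        def track(frames, detect):
            """frames: iterable of BGR images; detect(img) -> list of (x, y, w, h).
            The detect() callback stands in for any off-the-shelf human detector."""
            boxes, prev_gray = [], None
            for i, frame in enumerate(frames):
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                if i % DETECT_INTERVAL == 0 or prev_gray is None:
                    boxes = detect(frame)                            # full detection
                else:
                    boxes = propagate_boxes(prev_gray, gray, boxes)  # skipped frame
                prev_gray = gray
                yield boxes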

  • Pedestrian Detectability Estimation Considering Visual Adaptation to Drastic Illumination Change

    Yuki IMAEDA  Takatsugu HIRAYAMA  Yasutomo KAWANISHI  Daisuke DEGUCHI  Ichiro IDE  Hiroshi MURASE  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2018/02/20
      Vol:
    E101-D No:5
      Page(s):
    1457-1461

    We propose a method for estimating pedestrian detectability that considers the driver's visual adaptation to drastic illumination changes, which has not been studied in previous works. We assume that the driver's visual characteristics change in proportion to the elapsed time after an illumination change. As a solution, we construct multiple estimators corresponding to different elapsed periods and estimate the detectability by switching among them according to the elapsed time. To evaluate the proposed method, we built an experimental setup that presents a participant with illumination changes and conducted a preliminary simulated experiment to measure and estimate the pedestrian detectability as a function of the elapsed time. Results show that the proposed method can estimate the detectability accurately even after a drastic illumination change.
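
    As a small, assumed sketch of the switching idea described above, the code below trains one regressor per elapsed-time bin after an illumination change and dispatches predictions to the bin that matches the current elapsed time. The bin boundaries, features, regressor choice, and synthetic data are illustrative assumptions, not the paper's experimental settings.

        import numpy as np
        from sklearn.linear_model import Ridge

        # Elapsed-time bins (seconds after the illumination change); illustrative values.
        BIN_EDGES = [0.0, 1.0, 3.0, 10.0, np.inf]

        class SwitchingDetectabilityEstimator:
            """One regressor per elapsed-time bin; estimation switches among them."""

            def __init__(self):
                self.models = [Ridge(alpha=1.0) for _ in range(len(BIN_EDGES) - 1)]

            def _bin(self, elapsed):
                return int(np.searchsorted(BIN_EDGES, elapsed, side="right") - 1)

            def fit(self, features, detectability, elapsed_times):
                for b, model in enumerate(self.models):
                    idx = [i for i, t in enumerate(elapsed_times) if self._bin(t) == b]
                    if idx:
                        model.fit(features[idx], detectability[idx])
                return self

            def predict(self, feature, elapsed):
                return float(self.models[self._bin(elapsed)].predict(feature[None, :]))

        # Toy usage with synthetic data standing in for the measured detectability.
        rng = np.random.default_rng(0)
        X = rng.random((80, 5))      # e.g., contrast/size features of a pedestrian
        t = rng.random(80) * 12.0    # elapsed time since the illumination change
        y = rng.random(80)           # measured detectability
        est = SwitchingDetectabilityEstimator().fit(X, y, t)
        print(est.predict(X[0], elapsed=0.5))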