
Author Search Result

[Author] Kazuyuki TASAKA (2 hits)

  • Multiple Human Tracking Using an Omnidirectional Camera with Local Rectification and World Coordinates Representation

    Hitoshi NISHIMURA  Naoya MAKIBUCHI  Kazuyuki TASAKA  Yasutomo KAWANISHI  Hiroshi MURASE  

     
    PAPER

      Publicized: 2020/04/10
      Vol: E103-D No:6
      Page(s): 1265-1275

    Multiple human tracking is widely used in fields such as marketing and surveillance. The typical approach associates human detections between consecutive frames using the features and bounding boxes (position and size) of the detected humans. Some methods use an omnidirectional camera to cover a wider area, but ID switches often occur during association due to the following two factors: i) the features are adversely affected because the bounding box includes large background regions when a human is captured from an oblique angle; ii) the position and size change dramatically between consecutive frames because the distance metric is non-uniform in an omnidirectional image. In this paper, we propose a novel method that accurately tracks humans with an association metric designed for omnidirectional images. The proposed method has two key points: i) for feature extraction, we introduce local rectification, which reduces the effect of background regions in the bounding box; ii) for distance calculation, we describe positions in a world coordinate system, where the distance metric is uniform. In the experiments, we confirmed that Multiple Object Tracking Accuracy (MOTA) improved by 3.3 points on the LargeRoom dataset and by 2.3 points on the SmallRoom dataset.
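
    As an illustration only (not the authors' implementation), the sketch below shows the world-coordinate idea in Python: a detection's foot point in a ceiling-mounted omnidirectional image is projected onto the floor plane, and the association distance is computed there, where the metric is uniform. The equidistant fisheye projection model and the calibration values F_PIX, CAM_HEIGHT_M, CX, CY are assumptions made for this example.

    ```python
    import numpy as np

    # Hypothetical calibration values for the sketch, not values from the paper.
    F_PIX = 320.0          # focal length in pixels (equidistant model: r = f * theta)
    CAM_HEIGHT_M = 2.8     # camera height above the floor in meters
    CX, CY = 640.0, 640.0  # principal point (image center) in pixels

    def image_to_world(u, v):
        """Project a pixel (u, v), assumed to lie on the floor plane, to (x, y) meters."""
        dx, dy = u - CX, v - CY
        r = np.hypot(dx, dy)                      # radial pixel distance from the center
        theta = r / F_PIX                         # incidence angle under the equidistant model
        ground_r = CAM_HEIGHT_M * np.tan(theta)   # radial distance on the floor
        phi = np.arctan2(dy, dx)                  # azimuth is preserved by a radial projection
        return ground_r * np.cos(phi), ground_r * np.sin(phi)

    def association_distance(det_a, det_b):
        """Euclidean distance in world coordinates, uniform across the image."""
        xa, ya = image_to_world(*det_a)
        xb, yb = image_to_world(*det_b)
        return np.hypot(xa - xb, ya - yb)

    # A fixed pixel gap near the image rim corresponds to a larger floor distance
    # than the same gap near the center, which pixel-space metrics ignore.
    print(association_distance((900.0, 640.0), (930.0, 640.0)))
    ```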

  • Local Feature Reliability Measure Consistent with Match Conditions for Mobile Visual Search

    Kohei MATSUZAKI  Kazuyuki TASAKA  Hiromasa YANAGIHARA  

     
    PAPER-Image Processing and Video Processing

      Publicized: 2018/09/12
      Vol: E101-D No:12
      Page(s): 3170-3180

    We propose a feature design method for mobile visual search based on binary features and a bag-of-visual-words framework. In mobile visual search, detection and quantization errors are unavoidable due to viewpoint changes and cause performance degradation. Typical approaches extract features from a single view of each reference image, but such features are insufficient to manage detection and quantization errors. In this paper, we extract features from multi-view synthetic images. These features are selected according to our novel reliability measure, which enables robust recognition under various viewpoint changes. We regard feature selection as a maximum coverage problem: finding a finite set of features that maximizes an objective function under certain constraints. As this problem is NP-hard and thus computationally infeasible, we explore approximate solutions based on a greedy algorithm. For this purpose, we propose novel constraint functions designed to be consistent with the match conditions of the visual search method. Experiments show that the proposed method improves retrieval accuracy by 12.7 percentage points without increasing the database size or changing the search procedure. In other words, the proposed method enables more accurate search without adversely affecting database size, computational cost, or memory requirements.
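
    To make the greedy approximation concrete, here is an illustrative Python sketch of maximum coverage selection. The toy coverage sets (which synthetic views each feature matches in), the feature names, and the budget k are hypothetical; the paper's actual reliability measure and constraint functions are not reproduced.

    ```python
    from typing import Dict, List, Set

    def greedy_max_coverage(coverage: Dict[str, Set[int]], k: int) -> List[str]:
        """Select up to k features, each step taking the one that covers the most
        not-yet-covered items (the classic greedy (1 - 1/e) approximation)."""
        remaining = dict(coverage)  # shallow copy so the caller's dict is untouched
        covered: Set[int] = set()
        selected: List[str] = []
        for _ in range(k):
            best = max(remaining, key=lambda f: len(remaining[f] - covered), default=None)
            if best is None or not (remaining[best] - covered):
                break  # no remaining feature adds new coverage
            selected.append(best)
            covered |= remaining.pop(best)
        return selected

    # Toy example: each feature "covers" the synthetic views it matches in.
    views_matched = {
        "f1": {0, 1, 2, 3},
        "f2": {2, 3, 4},
        "f3": {4, 5},
        "f4": {0, 5},
    }
    print(greedy_max_coverage(views_matched, k=2))  # -> ['f1', 'f3']
    ```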