
Keyword Search Result

[Keyword] hierarchical search (3 hits)

Results 1-3 of 3
  • Drift-Free Tracking Surveillance Based on Online Latent Structured SVM and Kalman Filter Modules

    Yung-Yao CHEN  Yi-Cheng ZHANG  

     
    PAPER-Image Recognition, Computer Vision

  Publicized:
    2017/11/14
      Vol:
    E101-D No:2
      Page(s):
    491-503

    Tracking-by-detection methods consider the tracking task as a continuous detection problem applied over video frames. Modern tracking-by-detection trackers have online learning ability; the update stage is essential because it determines how to modify the classifier inherent in a tracker. However, most trackers search for the target within a fixed region centered at the previous object position and thus lack spatiotemporal consistency. This becomes a problem when the tracker detects an incorrect object during short-term occlusion. In addition, the scale of the bounding box that contains the target object is usually assumed not to change. This assumption is unrealistic for long-term tracking, where the scale of the target varies as the distance between the target and the camera changes. The accumulation of errors resulting from these shortcomings leads to the drift problem, i.e., drifting away from the target object. To resolve this problem, we present a drift-free, online learning-based tracking-by-detection method using a single static camera. We improve the latent structured support vector machine (SVM) tracker by designing a more robust tracker update step that incorporates two Kalman filter modules: the first predicts an adaptive search region in consideration of the object motion, and the second adjusts the scale of the bounding box by accounting for the background model. We also propose a hierarchical search strategy that combines Bhattacharyya coefficient similarity analysis and Kalman predictors; this strategy facilitates overcoming occlusion and increases tracking efficiency. We thoroughly evaluate this work using publicly available videos. Experimental results show that the proposed method outperforms state-of-the-art trackers.
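
    A minimal sketch of the search idea described in this abstract, assuming a grayscale-histogram appearance model: candidate positions around a Kalman-predicted center are scored with the Bhattacharyya coefficient and the best-matching box is kept. The function names, the histogram choice, and the search parameters are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def bhattacharyya_coefficient(p, q):
            # Similarity between two normalized histograms (1.0 means identical).
            return float(np.sum(np.sqrt(p * q)))

        def gray_histogram(patch, bins=16):
            # Normalized intensity histogram of an image patch (assumed appearance model).
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            return hist / max(hist.sum(), 1)

        def search_around_prediction(frame, template_hist, predicted_xy, box_wh,
                                     radius=20, step=4):
            # Scan a window centered at the Kalman-predicted position and return the
            # candidate whose histogram is most similar to the tracked template.
            px, py = predicted_xy
            w, h = box_wh
            best_score, best_xy = -1.0, (px, py)
            for dy in range(-radius, radius + 1, step):
                for dx in range(-radius, radius + 1, step):
                    x, y = px + dx, py + dy
                    if x < 0 or y < 0 or x + w > frame.shape[1] or y + h > frame.shape[0]:
                        continue  # candidate box falls outside the frame
                    patch = frame[y:y + h, x:x + w]
                    score = bhattacharyya_coefficient(template_hist, gray_histogram(patch))
                    if score > best_score:
                        best_score, best_xy = score, (x, y)
            return best_xy, best_score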

  • Retrieval of Images Captured by Car Cameras Using Its Front and Side Views and GPS Data

    Toshihiko YAMASAKI  Takayuki ISHIKAWA  Kiyoharu AIZAWA  

     
    PAPER

      Vol:
    E90-D No:1
      Page(s):
    217-223

    Recently, cars have been equipped with many sensors for driving safety. We have been trying to store driving-scene video together with such sensor data and to detect changes in street scenery. Detection results can be used for building a historical database of town scenery, automatic landmark updating of maps, and so forth. To compare images for change detection, retrieval of images taken at nearly identical locations is required as the first step. Since Global Positioning System (GPS) data inherently contain noise, we cannot rely only on GPS data for our image retrieval. Therefore, we have developed an image retrieval algorithm that employs edge-histogram-based image features in conjunction with hierarchical search. By using edge histograms projected onto the vertical and horizontal axes, the retrieval is made robust to image variation due to weather changes, clouds, obstacles, and so on. In addition, the matching cost is kept small by limiting the matching candidates through the hierarchical search. Experimental results demonstrate that the mean retrieval accuracy improves from 65% to 76% for the front-view images and from 34% to 53% for the side-view images.
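
    A rough sketch of the two-stage retrieval this abstract describes, under the assumption that camera positions are stored in a local planar coordinate system (meters): a coarse GPS filter first limits the matching candidates, and the survivors are ranked by the distance between edge histograms projected onto the horizontal and vertical axes. The gradient-based edge measure, the L1 distance, and the 30 m radius are illustrative choices, not taken from the paper.

        import numpy as np

        def edge_projection_features(gray):
            # Edge-magnitude projections onto the horizontal and vertical axes,
            # used here as a simple stand-in for the paper's edge histograms.
            gy, gx = np.gradient(gray.astype(float))
            mag = np.hypot(gx, gy)
            h_proj = mag.sum(axis=0)
            v_proj = mag.sum(axis=1)
            h_proj /= max(h_proj.sum(), 1e-9)
            v_proj /= max(v_proj.sum(), 1e-9)
            return h_proj, v_proj

        def feature_distance(f1, f2):
            # L1 distance between concatenated projections (images assumed equal size).
            return float(np.abs(np.concatenate(f1) - np.concatenate(f2)).sum())

        def hierarchical_retrieve(query_feat, query_pos, database, radius_m=30.0):
            # Stage 1 (coarse): keep frames whose position lies within radius_m of the query.
            # Stage 2 (fine): rank the remaining candidates by feature distance.
            near = [rec for rec in database
                    if np.hypot(rec["pos"][0] - query_pos[0],
                                rec["pos"][1] - query_pos[1]) <= radius_m]
            return sorted(near, key=lambda rec: feature_distance(query_feat, rec["feat"]))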

  • A VLSI Architecture for Motion Estimation Core Dedicated to H.263 Video Coding

    Gen FUJITA  Takao ONOYE  Isao SHIRAKAWA  

     
    PAPER

      Vol:
    E81-C No:5
      Page(s):
    702-707

    A VLSI architecture of a motion estimator dedicated to H.263 low-bitrate video coding is described. Adopting an efficient hierarchical search algorithm, the new motion estimator yields high-quality vectors with small area occupancy at a low operating frequency. A one-dimensional PE (Processing Element) array is devised and tuned to H.263 encoding, handling both the advanced prediction mode and the PB-frame mode. The proposed motion estimation core is integrated in 1.55 mm2 using 0.35 µm CMOS 3LM technology, operates at 15 MHz, and hence enables real-time motion estimation of QCIF pictures.
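
    A minimal sketch of a generic coarse-to-fine (hierarchical) block-matching search of the kind this abstract refers to, using the sum of absolute differences as the matching cost. The block size, search radii, and step sizes are illustrative assumptions, not the parameters of the described architecture.

        import numpy as np

        def sad(a, b):
            # Sum of absolute differences between two equally sized blocks.
            return int(np.abs(a.astype(int) - b.astype(int)).sum())

        def search(ref, cur_block, cx, cy, center, radius, step):
            # Search a (2*radius+1)^2 window around (cx, cy) + center with a given step.
            n = cur_block.shape[0]
            best_cost, best_mv = None, center
            for dy in range(-radius, radius + 1, step):
                for dx in range(-radius, radius + 1, step):
                    x, y = cx + center[0] + dx, cy + center[1] + dy
                    if x < 0 or y < 0 or x + n > ref.shape[1] or y + n > ref.shape[0]:
                        continue  # candidate block falls outside the reference frame
                    cost = sad(cur_block, ref[y:y + n, x:x + n])
                    if best_cost is None or cost < best_cost:
                        best_cost, best_mv = cost, (center[0] + dx, center[1] + dy)
            return best_mv

        def hierarchical_motion_vector(ref, cur, cx, cy, block=16):
            # Coarse pass with a large step, then a fine +/-1 pixel refinement
            # around the coarse result (a generic coarse-to-fine scheme).
            cur_block = cur[cy:cy + block, cx:cx + block]
            coarse = search(ref, cur_block, cx, cy, (0, 0), radius=15, step=4)
            return search(ref, cur_block, cx, cy, coarse, radius=1, step=1)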