
Keyword Search Result

[Keyword] correlation filter (15 hits)

1-15 hits
  • Spatial-Temporal Regularized Correlation Filter with Precise State Estimation for Visual Tracking

    Zhaoqian TANG  Kaoru ARAKAWA  

     
    PAPER-Digital Signal Processing

    Publicized: 2021/12/15 | Vol: E105-A No:6 | Page(s): 914-922

    Recently, the performance of discriminative correlation filter (CF) trackers has improved considerably in visual tracking. In this paper, we propose a spatial-temporal regularized correlation filter with precise state estimation (STPSE) to achieve further gains in tracking performance. First, to account for the continuous change of the object state, we use information from the previous two filters when training the correlation filter model, which is learned from hand-crafted features. Second, we introduce update control, in which the average peak-to-correlation energy (APCE) and the distance between the object locations obtained from HOG features and hand-crafted features are used to detect abnormal states around the object. APCE and this distance indicate the reliability of the filter response, so if an abnormality is detected, the proposed method does not update the scale or the object location estimated from the filter response. In the experiments, our tracker (STPSE) achieves significant, real-time performance using only a CPU on the challenging benchmark sequences (OTB2013, OTB2015, and TC128).
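
    The APCE measure mentioned above has a standard definition that can be computed directly from the response map. The sketch below shows, under assumed thresholds (`apce_thresh` and `dist_thresh` are illustrative, not the paper's values), how such an update-control gate might look; it is not the authors' implementation.

    ```python
    import numpy as np

    def apce(response):
        """Average peak-to-correlation energy (APCE) of a CF response map."""
        f_max, f_min = float(response.max()), float(response.min())
        return (f_max - f_min) ** 2 / float(np.mean((response - f_min) ** 2))

    def should_update(response, loc_filter, loc_handcrafted,
                      apce_thresh=20.0, dist_thresh=10.0):
        """Update control: skip the scale/location update when the response looks
        unreliable (low APCE) or the two location estimates disagree too much."""
        dist = np.linalg.norm(np.asarray(loc_filter) - np.asarray(loc_handcrafted))
        return apce(response) >= apce_thresh and dist <= dist_thresh
    ```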

  • Reinforced Tracker Based on Hierarchical Convolutional Features

    Xin ZENG  Lin ZHANG  Zhongqiang LUO  Xingzhong XIONG  Chengjie LI  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2022/03/10 | Vol: E105-D No:6 | Page(s): 1225-1233

    In recent years, visual tracking has improved steadily, but some methods still suffer from low accuracy and success rates. Some trackers are more accurate, but they cost more time. To solve this problem, we propose a reinforced tracker based on Hierarchical Convolutional Features (HCF for short). HOG, color-naming, and grayscale features are combined with different weights to supplement the convolutional features, which enhances tracking robustness. At the same time, we improve the model update strategy to reduce the time cost. This tracker is called RHCF, and the code is published at https://github.com/z15846/RHCF. Experiments on the OTB2013 dataset show that our tracker effectively improves both accuracy and success rate.

  • Correlation Filter-Based Visual Tracking Using Confidence Map and Adaptive Model

    Zhaoqian TANG  Kaoru ARAKAWA  

     
    PAPER-Vision

    Vol: E103-A No:12 | Page(s): 1512-1519

    Recently, visual trackers based on the kernelized correlation filter (KCF) framework have achieved robust and accurate results. These trackers must learn information about the object from each frame, so changes in the object state affect tracking performance. To deal with such state changes, we propose a novel KCF tracker that uses the filter response map, namely a confidence map, and an adaptive model. First, the method adopts a skipped scale pool scheme that varies the window size every two frames. Second, the object location is estimated by combining the filter response with the similarity of the luminance histogram at multiple points in the confidence map. Moreover, re-detection over the multiple peaks of the confidence map is used to prevent target drift and reduce the influence of illumination. Third, the learning rate used to update the object model is adjusted according to the filter response and the luminance-histogram similarity, taking the object state into account. Experimentally, the proposed tracker (CFCA) achieves outstanding performance on the challenging benchmark sequences (OTB2013 and OTB2015).
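
    The abstract states that the learning rate is adjusted from the filter response and the luminance-histogram similarity, but it does not give the exact rule. A minimal sketch of an adaptive model update, assuming both cues are normalized to [0, 1] and simply scale a base rate (`base_lr` is an illustrative value, not the paper's):

    ```python
    import numpy as np

    def adaptive_update(model, observation, response_score, hist_similarity,
                        base_lr=0.02):
        """Linear-interpolation model update whose learning rate is scaled by the
        reliability of the current frame (response score and histogram similarity
        are assumed to lie in [0, 1])."""
        lr = base_lr * response_score * hist_similarity
        return (1.0 - lr) * model + lr * observation
    ```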

  • Prediction-Based Scale Adaptive Correlation Filter Tracker

    Zuopeng ZHAO  Hongda ZHANG  Yi LIU  Nana ZHOU  Han ZHENG  Shanyi SUN  Xiaoman LI  Sili XIA  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2019/07/30 | Vol: E102-D No:11 | Page(s): 2267-2271

    Although correlation filter-based trackers have demonstrated excellent performance for visual object tracking, several challenges remain to be addressed. In this work, we propose a novel tracker based on the correlation filter framework. Traditional trackers have difficulty adapting accurately to changes in target scale when the target moves quickly. To address this, we propose a scale adaptive scheme based on predicted scales. We also incorporate a speed-based adaptive model update method to further improve overall tracking performance. Experiments with samples from the OTB100 and KITTI datasets demonstrate that our method outperforms existing state-of-the-art tracking algorithms in fast motion scenes.

  • Real-Time Sparse Visual Tracking Using Circulant Reverse Lasso Model

    Chenggang GUO  Dongyi CHEN  Zhiqi HUANG  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2018/10/09 | Vol: E102-D No:1 | Page(s): 175-184

    Sparse representation has been successfully applied to visual tracking. Recent progress in sparse tracking has mainly been made within the particle filter framework. However, most sparse trackers need to extract complex feature representations for each particle in a limited sample space, leading to high computational cost and inferior tracking performance. To deal with these issues, we propose a novel sparse tracking method based on the circulant reverse lasso model. Benefiting from the properties of circulant matrices, densely sampled target candidates are implicitly generated by cyclically shifting the base feature descriptors and are then embedded into a reverse sparse reconstruction model as a dictionary to encode a robust appearance template. The alternating direction method of multipliers is employed to solve the reverse sparse model, and the optimization can be carried out efficiently in the frequency domain, which enables the proposed tracker to run in real time. The resulting sparse coefficient map represents the similarity scores between the template and the circularly shifted samples, so the target location can be predicted directly from the coordinates of the peak coefficient. A scale-aware template updating strategy is combined with correlation filter template learning to account for both appearance deformations and scale variations. Both quantitative and qualitative evaluations on two challenging tracking benchmarks demonstrate that the proposed algorithm performs favorably against several state-of-the-art sparse representation based tracking methods.
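
    The key property exploited here is that all cyclic shifts of a feature patch can be scored at once in the Fourier domain. The snippet below illustrates only that circulant/FFT trick and the peak lookup, not the reverse-lasso ADMM solver itself:

    ```python
    import numpy as np

    def circular_correlation(template, feature):
        """Score every cyclic shift of `feature` against `template` at once by
        correlating in the Fourier domain (the circulant-matrix property)."""
        t_hat = np.fft.fft2(template)
        f_hat = np.fft.fft2(feature)
        response = np.real(np.fft.ifft2(np.conj(t_hat) * f_hat))
        peak = np.unravel_index(np.argmax(response), response.shape)
        return response, peak
    ```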

  • Adaptive Object Tracking with Complementary Models

    Peng GAO  Yipeng MA  Chao LI  Ke SONG  Yan ZHANG  Fei WANG  Liyi XIAO  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2018/08/06 | Vol: E101-D No:11 | Page(s): 2849-2854

    Most state-of-the-art discriminative tracking approaches are based on either template appearance models or statistical appearance models. Although template appearance models have shown excellent performance, they perform poorly when the target appearance changes rapidly. In contrast, statistical appearance models are insensitive to fast target state changes, but they yield inferior tracking results in challenging scenarios such as illumination variations and background clutter. In this paper, we propose an adaptive object tracking approach with complementary models based on template and statistical appearance models. The two models are unified via our novel combination strategy. In addition, we introduce an efficient update scheme to improve the performance of our approach. Experimental results demonstrate that our approach achieves superior performance at speeds that far exceed the frame-rate requirement on recent tracking benchmarks.

  • Accurate Scale Adaptive and Real-Time Visual Tracking with Correlation Filters

    Jiatian PI  Shaohua ZENG  Qing ZUO  Yan WEI  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2018/07/27 | Vol: E101-D No:11 | Page(s): 2855-2858

    Visual tracking has been studied for several decades but continues to draw significant attention because of its critical role in many applications. This letter addresses the problem of the fixed template size in the Kernelized Correlation Filter (KCF) tracker without a significant decrease in speed. Extensive experiments are performed on the new OTB dataset.

  • Twofold Correlation Filtering for Tracking Integration

    Wei WANG  Weiguang LI  Zhaoming CHEN  Mingquan SHI  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2018/07/10 | Vol: E101-D No:10 | Page(s): 2547-2550

    In general, effectively integrating the advantages of different trackers can yield a unified performance improvement. In this work, we study the integration of multiple correlation filter (CF) trackers and propose a novel but simple tracking integration method that combines different trackers at the filter level. Because different CF trackers use different correlation filters and features, their tracking results are not directly comparable for integration. To tackle this, we propose a twofold CF to unify the various response maps so that the results of different tracking algorithms can be compared, boosting tracking performance in the spirit of ensemble learning. Experiments integrating two CF methods on the OTB datasets demonstrate that the proposed method is effective and promising.
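
    The abstract does not spell out the twofold CF construction, but the comparability problem it targets can be illustrated with a generic sketch: rescale each tracker's response map to a common range before combining, then read the target location from the fused peak. The uniform default weights are an assumption, not the paper's scheme:

    ```python
    import numpy as np

    def normalize_response(r):
        """Rescale a response map to [0, 1] so maps produced with different
        filters and features become comparable."""
        r = r - r.min()
        return r / (r.max() + 1e-12)

    def fuse_trackers(responses, weights=None):
        """Combine normalized response maps from several CF trackers and return
        the fused map together with its peak location."""
        maps = [normalize_response(r) for r in responses]
        if weights is None:
            weights = [1.0 / len(maps)] * len(maps)
        fused = sum(w * m for w, m in zip(weights, maps))
        peak = np.unravel_index(np.argmax(fused), fused.shape)
        return fused, peak
    ```

    In practice, the per-tracker weights could also be set from each tracker's reliability rather than kept uniform.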

  • Long-Term Tracking Based on Multi-Feature Adaptive Fusion for Video Target

    Hainan ZHANG  Yanjing SUN  Song LI  Wenjuan SHI  Chenglong FENG  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2018/02/02 | Vol: E101-D No:5 | Page(s): 1342-1349

    Correlation filter-based trackers with an appearance model built from a single feature have poor robustness to challenging video environments, which include factors such as occlusion, fast motion, and out-of-view targets. In this paper, a long-term tracking algorithm based on multi-feature adaptive fusion for video targets is presented. We design a robust appearance model by fusing powerful features, including histogram of oriented gradients, local binary patterns, and color-naming, at the response-map level to overcome interference in the video. In addition, a random fern classifier is trained as a re-detector to locate the target when tracking failure occurs, so that long-term tracking is achieved. We evaluate our algorithm on large-scale benchmark datasets, and the results show that the proposed algorithm achieves more accurate and more robust performance in complex video environments.
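
    As one hedged illustration of response-map-level fusion, the sketch below weights each feature's response (e.g., HOG, LBP, color-naming) by its peak-to-sidelobe ratio before summing; the paper's own adaptive weighting is not specified in the abstract, so PSR is used here purely as a plausible confidence measure:

    ```python
    import numpy as np

    def psr(response, exclude=5):
        """Peak-to-sidelobe ratio, a common confidence measure for a CF response."""
        peak = float(response.max())
        py, px = np.unravel_index(np.argmax(response), response.shape)
        sidelobe = response.astype(np.float64).copy()
        sidelobe[max(0, py - exclude):py + exclude + 1,
                 max(0, px - exclude):px + exclude + 1] = np.nan
        return (peak - np.nanmean(sidelobe)) / (np.nanstd(sidelobe) + 1e-12)

    def fuse_by_confidence(responses):
        """Weight each feature's response map by its PSR before summing, so that
        more reliable cues dominate the fused map."""
        weights = np.array([psr(r) for r in responses])
        weights = weights / weights.sum()
        return sum(w * r for w, r in zip(weights, responses))
    ```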

  • Real-Time Object Tracking via Fusion of Global and Local Appearance Models

    Ju Hong YOON  Jungho KIM  Youngbae HWANG  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2017/08/07 | Vol: E100-D No:11 | Page(s): 2738-2743

    In this letter, we propose a robust and fast tracking framework that combines local and global appearance models to cope with partial occlusion and pose variations. The global appearance model is represented by a correlation filter to efficiently estimate the movement of the target, and the local appearance model is represented by local feature points to handle partial occlusion and scale variations. The global and local appearance models are then unified via Bayesian inference in our tracking framework. We experimentally demonstrate the effectiveness of the proposed method in terms of both accuracy and time complexity, taking 12 ms per frame on average on benchmark datasets.

  • Deep Correlation Tracking with Backtracking

    Yulong XU  Yang LI  Jiabao WANG  Zhuang MIAO  Hang LI  Yafei ZHANG  Gang TAO  

     
    LETTER-Vision

    Vol: E100-A No:7 | Page(s): 1601-1605

    The feature extractor is an important component of a tracker, and convolutional neural networks (CNNs) have demonstrated excellent performance in visual tracking. However, CNN features do not perform well under low illumination. To address this issue, we propose a novel deep correlation tracker with backtracking, which consists of target translation, backtracking, and scale estimation. We employ four correlation filters, one with a histogram of oriented gradients (HOG) descriptor and the other three with CNN features, to estimate the translation. In particular, we propose a backtracking algorithm to reconfirm the translation location. Comprehensive experiments are performed on a large-scale challenging benchmark dataset, and the results show that the proposed algorithm outperforms state-of-the-art methods in accuracy and robustness.
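
    The backtracking criterion is not detailed in the abstract; one common way to reconfirm a translation is a forward-backward consistency check, sketched below with an illustrative tolerance (`tol` is an assumed value):

    ```python
    import numpy as np

    def backtrack_confirm(forward_shift, backward_shift, tol=3.0):
        """Forward-backward check: tracking forward gives displacement d, and
        tracking back from the new position should give roughly -d. Accept the
        translation only when the round-trip error is small."""
        error = np.linalg.norm(np.asarray(forward_shift, dtype=float)
                               + np.asarray(backward_shift, dtype=float))
        return error <= tol
    ```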

  • Feature Adaptive Correlation Tracking

    Yulong XU  Yang LI  Jiabao WANG  Zhuang MIAO  Hang LI  Yafei ZHANG  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2016/11/28 | Vol: E100-D No:3 | Page(s): 594-597

    The feature extractor plays an important role in visual tracking, but most state-of-the-art methods employ the same feature representation in all scenes. Taking this diversity into account, a tracker should choose different features according to the video. In this work, we propose a novel feature-adaptive correlation tracker, which decomposes the tracking task into translation and scale estimation. According to the luminance of the target, our approach automatically selects either hierarchical convolutional features or histogram of oriented gradients features for translation, depending on the scenario. Furthermore, we employ a discriminative correlation filter to handle scale variations. Extensive experiments are performed on a large-scale challenging benchmark dataset, and the results show that the proposed algorithm outperforms state-of-the-art trackers in accuracy and robustness.
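
    A minimal sketch of luminance-driven feature selection, assuming the rule is "use HOG in dark scenes, deep features otherwise"; the threshold and the direction of the rule are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def select_feature_type(target_patch, luminance_thresh=0.3):
        """Choose the feature family from the target's mean luminance: fall back
        to HOG in dark scenes, where deep features tend to degrade, otherwise use
        hierarchical convolutional features."""
        gray = target_patch.mean(axis=2) if target_patch.ndim == 3 else target_patch
        mean_luminance = float(gray.mean()) / 255.0
        return "hog" if mean_luminance < luminance_thresh else "cnn"
    ```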

  • Combining Color Features for Real-Time Correlation Tracking

    Yulong XU  Zhuang MIAO  Jiabao WANG  Yang LI  Hang LI  Yafei ZHANG  Weiguang XU  Zhisong PAN  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2016/10/04 | Vol: E100-D No:1 | Page(s): 225-228

    Correlation filter-based approaches achieve competitive results in visual tracking, but traditional correlation tracking methods fail to exploit the color information in videos. To address this issue, we propose a novel tracker that combines color features within a correlation filter framework, extracting not only grayscale but also color information as feature maps to compute the maximum response location via multi-channel correlation filters. In particular, we modify the label function of the conventional classifier to improve positioning accuracy and employ a discriminative correlation filter to handle scale variations. Experiments are performed on 35 challenging benchmark color sequences, and the results clearly show that our method outperforms state-of-the-art tracking approaches while operating in real time.
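
    The multi-channel detection step referred to above is standard for correlation filters: each feature channel (grayscale plus color channels) is correlated with its learned filter in the Fourier domain and the per-channel responses are summed before locating the peak. A minimal sketch, assuming `filters_hat` already holds the learned Fourier-domain filters:

    ```python
    import numpy as np

    def multichannel_response(filters_hat, features):
        """Detection step of a multi-channel correlation filter: correlate each
        feature channel with its filter in the Fourier domain, sum the responses,
        and return the peak location."""
        resp_hat = np.zeros(features.shape[:2], dtype=complex)
        for c in range(features.shape[-1]):
            resp_hat += np.conj(filters_hat[..., c]) * np.fft.fft2(features[..., c])
        response = np.real(np.fft.ifft2(resp_hat))
        return np.unravel_index(np.argmax(response), response.shape)
    ```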

  • Robust Scale Adaptive and Real-Time Visual Tracking with Correlation Filters

    Jiatian PI  Keli HU  Yuzhang GU  Lei QU  Fengrong LI  Xiaolin ZHANG  Yunlong ZHAN  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2016/04/07 | Vol: E99-D No:7 | Page(s): 1895-1902

    Visual tracking has been studied for several decades but continues to draw significant attention because of its critical role in many applications. Recent years have seen growing interest in the use of correlation filters in visual tracking systems, owing to their extremely compelling results in different competitions and benchmarks. However, there is still a need to improve the overall tracking capability to counter various tracking issues, including large scale variation, occlusion, and deformation. This paper presents an appealing tracker with robust scale estimation, which handles the problem of the fixed template size in the Kernelized Correlation Filter (KCF) tracker without a significant decrease in speed. We apply a discriminative correlation filter for scale estimation as an independent step after finding the optimal translation with the KCF tracker. Compared to an exhaustive scale-space search scheme, our approach provides improved performance while being computationally efficient. To reveal the effectiveness of our approach, we use benchmark sequences annotated with 11 attributes to evaluate how well the tracker handles different attributes. Numerous experiments demonstrate that the proposed algorithm performs favorably against several state-of-the-art algorithms. Appealing results in both accuracy and robustness are also achieved on all 51 benchmark sequences, demonstrating the efficiency of our tracker.
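
    The independent scale step can be pictured as a small DSST-style search around the current scale after translation is fixed. In the sketch below, `extract_patch_feat` and `scale_filter_hat` are placeholders for the feature extraction and the learned 1-D scale filter, and the pyramid size and step are illustrative, not the paper's settings:

    ```python
    import numpy as np

    def estimate_scale(extract_patch_feat, scale_filter_hat, current_scale,
                       n_scales=17, scale_step=1.02):
        """Score a small pyramid of candidate scales with a 1-D scale correlation
        filter and keep the best one. extract_patch_feat(s) should return the
        feature vector of the target patch resampled at scale s."""
        exponents = (np.arange(n_scales) - n_scales // 2).astype(float)
        candidates = current_scale * scale_step ** exponents
        scores = []
        for s in candidates:
            feat = extract_patch_feat(s)
            response = np.real(np.fft.ifft(np.conj(scale_filter_hat) * np.fft.fft(feat)))
            scores.append(response.max())
        return float(candidates[int(np.argmax(scores))])
    ```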

  • Rotation-Tolerant Camera Identification Using Optimal Tradeoff Circular Harmonic Function Correlation Filter

    Dai-Kyung HYUN  Dae-Jin JUNG  Hae-Yeoun LEE  Heung-Kyu LEE  

     
    LETTER-Information Network

    Vol: E96-D No:6 | Page(s): 1394-1397

    In this paper, we propose a novel camera identification method based on photo-response non-uniformity (PRNU) that performs well even on rotated videos. One disadvantage of PRNU-based camera identification methods is that they are very sensitive to de-synchronization: if a video under investigation is slightly rotated, identification without synchronization fails. The proposed method solves this out-of-sync problem by achieving rotation tolerance with an Optimal Tradeoff Circular Harmonic Function (OTCHF) correlation filter. The experimental results show that the proposed method identifies the source device with high accuracy from rotated videos.
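
    For orientation, basic PRNU matching (without the rotation tolerance that the OTCHF filter adds) reduces to correlating a camera fingerprint with a frame's noise residual. In the sketch below a Gaussian filter stands in for the wavelet denoiser typically used in PRNU work; it is a simplification, not the paper's pipeline:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(frame, sigma=1.0):
        """Crude PRNU noise residual: the frame minus a denoised copy of itself."""
        f = frame.astype(np.float64)
        return f - gaussian_filter(f, sigma)

    def prnu_similarity(fingerprint, frame):
        """Normalized cross-correlation between a camera fingerprint and a frame's
        noise residual; higher values suggest the frame came from that camera."""
        w = noise_residual(frame)
        a = fingerprint - fingerprint.mean()
        b = w - w.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    ```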