
Author Search Result

[Author] Yanjing SUN (2 hits)

Results 1-2 of 2
  • Long-Term Tracking Based on Multi-Feature Adaptive Fusion for Video Target

    Hainan ZHANG  Yanjing SUN  Song LI  Wenjuan SHI  Chenglong FENG  

     
    PAPER-Fundamentals of Information Systems
    Publicized: 2018/02/02
    Vol: E101-D No:5
    Page(s): 1342-1349

    Correlation filter-based trackers whose appearance model is built from a single feature lack robustness in challenging video environments involving occlusion, fast motion, and out-of-view targets. In this paper, a long-term tracking algorithm based on multi-feature adaptive fusion for video targets is presented. We design a robust appearance model by fusing powerful features, including histogram of oriented gradients, local binary patterns, and color-naming, at the response-map level to overcome interference in the video. In addition, a random fern classifier is trained as a re-detector that relocates the target when tracking failure occurs, so that long-term tracking is achieved. We evaluate our algorithm on large-scale benchmark datasets, and the results show that the proposed algorithm delivers more accurate and more robust performance in complex video environments.
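    A minimal sketch of response-map-level fusion, assuming a peak-to-sidelobe-ratio (PSR) weighting scheme; the abstract does not specify the paper's adaptive fusion rule, so the PSR cue, the 11x11 peak-exclusion window, and the function names below are illustrative assumptions (Python/NumPy):

        import numpy as np

        def psr(response):
            """Peak-to-sidelobe ratio of a correlation response map
            (assumed confidence cue; an 11x11 window around the peak
            is excluded as the peak region)."""
            peak = response.max()
            py, px = np.unravel_index(response.argmax(), response.shape)
            mask = np.ones_like(response, dtype=bool)
            mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False
            sidelobe = response[mask]
            return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

        def fuse_responses(responses):
            """Fuse per-feature response maps (e.g. HOG, LBP,
            color-naming) with weights proportional to each map's PSR,
            so the most confident feature dominates in each frame."""
            weights = np.array([psr(r) for r in responses])
            weights = weights / weights.sum()
            fused = sum(w * r for w, r in zip(weights, responses))
            return fused, weights

        # Hypothetical usage with three HxW response maps from separate
        # correlation filters; the target position is the argmax of `fused`.
        responses = [np.random.rand(50, 50) for _ in range(3)]
        fused, weights = fuse_responses(responses)
        y, x = np.unravel_index(fused.argmax(), fused.shape)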

  • Gray Augmentation Exploration with All-Modality Center-Triplet Loss for Visible-Infrared Person Re-Identification

    Xiaozhou CHENG  Rui LI  Yanjing SUN  Yu ZHOU  Kaiwen DONG  

     
    LETTER-Image Recognition, Computer Vision
    Publicized: 2022/04/06
    Vol: E105-D No:7
    Page(s): 1356-1360

    Visible-infrared person re-identification (VI-ReID) is a challenging pedestrian retrieval task owing to the large modality and appearance discrepancies between visible and infrared images. To address it, this letter proposes a novel gray augmentation exploration (GAE) method that increases the diversity of the training data and searches for the best gray-augmentation ratio, so as to learn a more focused model. Additionally, we propose a strong all-modality center-triplet (AMCT) loss that makes features extracted from the same pedestrian more compact while pushing those from different persons further apart. Experiments on the public SYSU-MM01 dataset demonstrate the superiority of the proposed method on the VI-ReID task.
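    A minimal PyTorch sketch in the spirit of a center-triplet loss, assuming one center per identity averaged over all modality samples (visible, infrared, gray), a pull term for intra-class compactness, and a margin hinge between distinct centers for inter-class separation; the exact AMCT formulation, the margin value, the equal weighting of the two terms, and the function name are all assumptions:

        import torch
        import torch.nn.functional as F

        def center_triplet_loss(features, labels, margin=0.3):
            # features: (N, D) embeddings pooled over visible/infrared/gray
            # samples; labels: (N,) identity labels. Assumes >= 2 identities.
            ids, inv = labels.unique(return_inverse=True)
            # One "all-modality" center per identity.
            centers = torch.stack([features[inv == k].mean(dim=0)
                                   for k in range(len(ids))])
            # Pull: distance of each sample to its own identity center.
            pull = (features - centers[inv]).pow(2).sum(dim=1).mean()
            # Push: hinge keeping distinct centers at least `margin` apart.
            dist = torch.cdist(centers, centers)
            off_diag = dist[~torch.eye(len(ids), dtype=torch.bool,
                                       device=dist.device)]
            push = F.relu(margin - off_diag).mean()
            return pull + push

        # Hypothetical usage: 4 identities x 3 modality samples each.
        feats = F.normalize(torch.randn(12, 256), dim=1)
        labels = torch.arange(4).repeat_interleave(3)
        loss = center_triplet_loss(feats, labels)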