
Keyword Search Result

[Keyword] low-rank representation (2 hits)

Hits 1-2 of 2
  • Action Recognition Using Low-Rank Sparse Representation

    Shilei CHENG  Song GU  Maoquan YE  Mei XIE  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2017/11/24
    Vol: E101-D No:3
    Page(s): 830-834

    Human action recognition in videos has attracted great research interest in computer vision. The Bag-of-Words (BoW) model is commonly used to obtain video-level representations; however, it crudely assigns each feature vector to its nearest visual word, and the resulting collection of unordered words discards the spatial information of the interest points, inevitably causing nontrivial quantization errors and limiting improvements in classification rates. To address these drawbacks, we propose an approach for action recognition that encodes spatio-temporal log-Euclidean covariance matrix (ST-LECM) features within a low-rank and sparse representation framework. Motivated by low-rank matrix recovery, local descriptors in a spatio-temporal neighborhood have similar representations, so their coefficient matrix should be approximately low rank. The learned coefficients thus not only capture the global data structure but also preserve local consistency. Experimental results show that the proposed approach yields excellent recognition performance on synthetic video datasets and is robust to action variability, view variations and partial occlusion.
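
    As an illustration of the low-rank and sparse representation idea mentioned in this abstract, the sketch below decomposes a matrix of stacked local descriptors into a low-rank part and a sparse error part using the standard inexact augmented-Lagrange-multiplier scheme. It is not the authors' ST-LECM pipeline; the function names (svt, soft, low_rank_sparse) and parameter choices are assumptions made for the example, and descriptor extraction and classification are omitted.

```python
# Minimal sketch (not the authors' exact formulation): split a matrix X of stacked
# descriptors into a low-rank part L and a sparse error part S,
#   min ||L||_* + lam * ||S||_1   s.t.   X = L + S,
# via the standard inexact augmented-Lagrange-multiplier iteration.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_sparse(X, lam=None, tol=1e-6, max_iter=500):
    """Decompose X (descriptors stacked as columns) into low-rank L and sparse S."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_x = np.linalg.norm(X, "fro")
    mu = 1.25 / np.linalg.norm(X, 2)      # initial penalty weight
    Y = np.zeros_like(X)                  # Lagrange multipliers
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(max_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)    # low-rank update
        S = soft(X - L + Y / mu, lam / mu)   # sparse update
        R = X - L - S                        # constraint residual
        Y = Y + mu * R
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(R, "fro") / norm_x < tol:
            break
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for descriptors from one spatio-temporal neighborhood:
    # an approximately rank-3 matrix corrupted by a few gross errors.
    base = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
    noise = (rng.random((60, 40)) < 0.05) * rng.standard_normal((60, 40)) * 5
    L, S = low_rank_sparse(base + noise)
    print("recovered rank:", np.linalg.matrix_rank(L, tol=1e-3))
```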

  • Low-Rank Representation with Graph Constraints for Robust Visual Tracking

    Jieyan LIU  Ao MA  Jingjing LI  Ke LU  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2017/03/08
    Vol: E100-D No:6
    Page(s): 1325-1338

    Subspace representation models form an important subset of visual tracking algorithms. Compared with models that operate on the original data space, subspace representation models can effectively reduce computational complexity and filter out high-dimensional noise. However, in complicated situations, e.g., dramatic illumination changes, large areas of occlusion and abrupt object drift, traditional subspace representation models may fail to handle the visual tracking task. In this paper, we propose a novel subspace representation algorithm for robust visual tracking using low-rank representation with graph constraints (LRGC). Low-rank representation is well known for its ability to handle corrupted samples, and graph constraints flexibly characterize sample relationships. We aim to exploit the benefits of both low-rank representation and graph constraints, and deploy them to handle challenging visual tracking problems. Specifically, we first propose a novel graph structure to characterize the relationship of the target object across different observation states. Then we learn a subspace by jointly optimizing low-rank representation and graph embedding in a unified framework. Finally, the learned subspace is embedded into a Bayesian inference framework via the dynamical model and the observation model. Experiments on several video benchmarks demonstrate that our algorithm performs better than traditional ones, especially in dynamically changing and drifting situations.
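
    To illustrate how a graph constraint can be coupled with a representation objective, as described in this abstract, the sketch below builds a k-nearest-neighbour affinity graph over candidate observations and solves a graph-Laplacian-regularized coding problem. It is not the paper's LRGC solver: the nuclear norm is replaced by a Frobenius penalty so the problem has a closed-form Sylvester-equation solution, and the names (knn_affinity, graph_regularized_codes) and parameter values are assumptions made for the example.

```python
# Minimal sketch (not the paper's LRGC solver): a graph Laplacian penalty
# tr(Z L Z^T) asks linked observations to receive similar codes Z.  With a
# Frobenius surrogate for the low-rank term, the problem
#   min_Z ||X - D Z||_F^2 + alpha * tr(Z L Z^T) + beta * ||Z||_F^2
# reduces to the Sylvester equation (D^T D + beta I) Z + Z (alpha L) = D^T X.
import numpy as np
from scipy.linalg import solve_sylvester

def knn_affinity(X, k=5):
    """Symmetric k-nearest-neighbour affinity graph over the columns of X."""
    d2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)  # pairwise sq. dists
    W = np.zeros_like(d2)
    for j in range(d2.shape[0]):
        nbrs = np.argsort(d2[j])[1:k + 1]                       # skip the point itself
        W[j, nbrs] = np.exp(-d2[j, nbrs] / (d2[j, nbrs].mean() + 1e-12))
    return np.maximum(W, W.T)

def graph_regularized_codes(X, D, alpha=1.0, beta=0.1, k=5):
    """Solve (D^T D + beta I) Z + Z (alpha L) = D^T X for the codes Z."""
    W = knn_affinity(X, k)
    L = np.diag(W.sum(axis=1)) - W                              # graph Laplacian
    A = D.T @ D + beta * np.eye(D.shape[1])
    return solve_sylvester(A, alpha * L, D.T @ X)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((50, 20))   # dictionary of target templates (illustrative)
    X = rng.standard_normal((50, 30))   # candidate observations as columns
    Z = graph_regularized_codes(X, D)
    print(Z.shape)                      # (20, 30): one code per observation
```

    In a tracking loop, codes like Z would feed the observation model of the Bayesian (e.g., particle filter) framework the abstract refers to, scoring how well each candidate is explained by the learned subspace.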