
Author Search Result

[Author] Changhong CHEN (2 hits)

  • Collective Activity Recognition by Attribute-Based Spatio-Temporal Descriptor

    Changhong CHEN  Hehe DOU  Zongliang GAN  

     
    LETTER-Pattern Recognition

      Publicized: 2015/07/22
      Vol: E98-D No:10
      Page(s): 1875-1878

    Collective activity recognition plays an important role in high-level video analysis. Most current feature representations rely on contextual information extracted from the behavior of nearby people, which requires every person to be detected and their pose estimated. After feature extraction, hierarchical graphical models are typically employed to model the spatio-temporal patterns of individuals and their interactions, which entails complex preprocessing and inference operations. To overcome these drawbacks, we present a new feature representation method, the attribute-based spatio-temporal (AST) descriptor. First, two types of information are exploited: spatio-temporal (ST) features and manually specified attribute features. An attribute classifier is trained to model the relationship between the ST features and the attribute features, and its output is used to refresh the attribute features. The ST features, the refreshed attribute features, and the relationships between the attributes are then combined to form the AST descriptor. Finally, an objective classifier is trained on the AST descriptor, and its weight parameters are used for recognition. Experiments on standard collective activity benchmarks show the effectiveness of the proposed descriptor.
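
    As a rough illustration of the pipeline sketched in this abstract, here is a minimal Python version of the AST descriptor construction. It is a sketch under stated assumptions, not the authors' implementation: the feature arrays, the use of one linear SVM per attribute, and the pairwise-product encoding of the attribute relationships are all placeholder choices.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def build_ast_descriptor(st_features, attribute_labels):
        """Combine ST features with classifier-refreshed attribute scores.

        st_features      : (n_samples, d) spatio-temporal features
        attribute_labels : (n_samples, k) manually specified binary
                           attributes (assumed encoding)
        """
        n, k = attribute_labels.shape
        refreshed = np.zeros((n, k))
        for j in range(k):
            # One linear classifier per attribute models the relationship
            # between the ST features and that attribute; its decision
            # values "refresh" the attribute scores.
            clf = LinearSVC().fit(st_features, attribute_labels[:, j])
            refreshed[:, j] = clf.decision_function(st_features)
        # Pairwise products of refreshed scores stand in for the
        # "relationship between the attributes" (our assumption).
        relations = np.einsum('ni,nj->nij', refreshed, refreshed).reshape(n, -1)
        # The AST descriptor concatenates all three components.
        return np.hstack([st_features, refreshed, relations])

    # The "objective classifier" is then trained on AST descriptors, and
    # its weight parameters are used for recognition, e.g.:
    # activity_clf = LinearSVC().fit(build_ast_descriptor(X, A), y)
    ```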

  • Topic-Based Knowledge Transfer Algorithm for Cross-View Action Recognition

    Changhong CHEN  Shunqing YANG  Zongliang GAN  

     
    LETTER-Pattern Recognition

      Vol: E97-D No:3
      Page(s): 614-617

    Cross-view action recognition is a challenging problem in human motion analysis, because appearance-based features become unreliable when the viewpoint changes. In this paper, a new framework for cross-view action recognition is proposed based on topic-based knowledge transfer. First, spatio-temporal descriptors are extracted from the action videos, and each video is modeled as a bag of visual words (BoVW) over a codebook constructed with the k-means clustering algorithm. Second, Latent Dirichlet Allocation (LDA) is employed to assign topics to the BoVW representation; the resulting topic distribution of visual words (ToVW) is normalized and used as the feature vector. Third, to bridge different views, the ToVW features are transformed into bilingual ToVW by constructing bilingual dictionaries, which ensure that the same action has the same representation across views. We demonstrate the effectiveness of the proposed algorithm on the IXMAS multi-view dataset.
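
    For concreteness, a hedged sketch of the three steps above follows. The local descriptor extraction is omitted, the codebook and topic sizes are arbitrary, and the least-squares linear map is only a stand-in assumption for the paper's bilingual dictionary construction.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import LatentDirichletAllocation

    def bovw_histograms(descriptors_per_video, n_words=1000, seed=0):
        """Step 1: k-means codebook and bag-of-visual-words histograms."""
        codebook = KMeans(n_clusters=n_words, random_state=seed)
        codebook.fit(np.vstack(descriptors_per_video))
        hists = np.zeros((len(descriptors_per_video), n_words))
        for i, desc in enumerate(descriptors_per_video):
            words, counts = np.unique(codebook.predict(desc),
                                      return_counts=True)
            hists[i, words] = counts
        return hists

    def topic_features(hists, n_topics=30, seed=0):
        """Step 2: LDA topic distribution of visual words (ToVW),
        normalized and used as the feature vector."""
        lda = LatentDirichletAllocation(n_components=n_topics,
                                        random_state=seed)
        tovw = lda.fit_transform(hists)
        return tovw / tovw.sum(axis=1, keepdims=True)

    def bilingual_map(tovw_src, tovw_tgt):
        """Step 3 (assumption): approximate the bilingual dictionary with
        a least-squares linear map fitted on paired videos of the same
        actions seen from both views."""
        W, *_ = np.linalg.lstsq(tovw_src, tovw_tgt, rcond=None)
        return W  # map a new source-view ToVW with: tovw_new @ W
    ```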