
Author Search Result

[Author] Xiaohai HE (4 hits)

1-4 of 4 hits
  • Measuring Collectiveness in Crowded Scenes via Link Prediction

    Jun JIANG  Di WU  Qizhi TENG  Xiaohai HE  Mingliang GAO  

     
    LETTER-Image Recognition, Computer Vision

      Publicized: 2015/05/14
      Vol: E98-D No:8
      Page(s): 1617-1620

Collective motion stems from the coordinated behaviors of individuals in a crowd, and has attracted growing interest from the physics and computer vision communities. Collectiveness is a metric of the degree to which the state of crowd motion is ordered or synchronized. In this letter, we present a scheme to measure collectiveness via link prediction. Toward this aim, we propose a similarity index called superposed random walk with restarts (SRWR) and construct a novel collectiveness descriptor using the SRWR index and the Laplacian spectrum of a network. Experiments show that our approach gives promising results in real-world crowd scenes and performs better than state-of-the-art methods.
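    The SRWR index and the Laplacian-spectrum descriptor are defined in the letter itself; the sketch below is only a rough illustration of the underlying link-prediction idea, scoring collectiveness with a plain random walk with restarts similarity on a k-nearest-neighbour crowd graph. The graph construction, function names, and the averaged-similarity descriptor are assumptions made for illustration, not the authors' method.

    ```python
    import numpy as np

    def knn_velocity_graph(points, velocities, k=8):
        """Hypothetical interaction graph: connect each tracked point to its k nearest
        neighbours, weighted by the non-negative correlation of their velocity directions."""
        n = len(points)
        dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        np.fill_diagonal(dist, np.inf)
        v = velocities / (np.linalg.norm(velocities, axis=1, keepdims=True) + 1e-12)
        W = np.zeros((n, n))
        for i in range(n):
            for j in np.argsort(dist[i])[:k]:
                W[i, j] = max(float(v[i] @ v[j]), 0.0)
        return np.maximum(W, W.T)  # symmetrise

    def rwr_similarity(W, restart=0.15):
        """Plain random walk with restarts similarity: S = c * (I - (1 - c) * P)^-1,
        where P is the row-normalised transition matrix and c the restart probability."""
        P = W / (W.sum(axis=1, keepdims=True) + 1e-12)
        n = len(W)
        return restart * np.linalg.inv(np.eye(n) - (1.0 - restart) * P)

    def toy_collectiveness(points, velocities, k=8):
        """Toy descriptor: mean RWR similarity over linked pairs (not the paper's SRWR)."""
        W = knn_velocity_graph(points, velocities, k)
        S = rwr_similarity(W)
        linked = W > 0
        return float(S[linked].mean()) if linked.any() else 0.0
    ```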

  • Measuring Crowd Collectiveness via Compressive Sensing

    Jun JIANG  Xiaohong WU  Xiaohai HE  Pradeep KARN  

     
    LETTER

      Vol: E98-A No:11
      Page(s): 2263-2266

Crowd collectiveness, i.e., a quantitative metric for collective motion, has received increasing attention in recent years. Most existing methods build a collective network by assuming that each agent in the crowd interacts with neighbors within a fixed radius r or with a fixed number k of nearest neighbors. However, they usually use a universal r or k across different crowded scenes, which may yield an inaccurate network topology and a lack of adaptivity to varying collective-motion scenarios, resulting in poor performance. To overcome these limitations, we propose a compressive sensing (CS) based method for measuring crowd collectiveness. The proposed method uncovers the connections among agents from their motion time series by solving a CS problem, without needing to specify r or k a priori. A descriptor based on the average velocity correlations of connected agents is then constructed to compute the collectiveness value. Experimental results demonstrate that the proposed method is effective in measuring crowd collectiveness, and performs on par with or better than state-of-the-art methods.
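    The exact CS formulation and descriptor are given in the letter; purely as an illustration of the idea, the sketch below recovers each agent's connections by sparsely regressing its motion time series on those of all other agents (an L1-penalised regression standing in for the CS recovery step), then averages velocity correlations over the connected pairs. The Lasso solver, penalty weight, and threshold are illustrative assumptions, not the authors' choices.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_connections(series, alpha=0.05):
        """series: (n_agents, T) motion time series, e.g. per-frame speed or heading.
        Each agent's series is regressed on all the others with an L1 penalty; agents
        with non-zero coefficients are treated as connected (illustrative stand-in
        for the paper's CS recovery)."""
        n, _ = series.shape
        A = np.zeros((n, n), dtype=bool)
        for i in range(n):
            others = np.delete(np.arange(n), i)
            model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
            model.fit(series[others].T, series[i])       # T samples, n-1 features
            A[i, others] = np.abs(model.coef_) > 1e-6
        return A | A.T                                    # symmetrise

    def toy_collectiveness(velocity_series, connections):
        """Average pairwise velocity correlation over connected agents."""
        C = np.corrcoef(velocity_series)
        linked = connections & ~np.eye(len(C), dtype=bool)
        return float(C[linked].mean()) if linked.any() else 0.0
    ```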

  • A Jointly Optimized Predictive-Adaptive Partitioned Block Transform for Video Coding

    Di WU  Xiaohai HE  

     
    PAPER-Image Processing

      Vol: E96-A No:11
      Page(s): 2161-2168

In this paper, we propose a jointly optimized predictive-adaptive partitioned block transform to exploit the spatial characteristics of intra residuals and improve video coding performance. Under the assumptions of traditional Markov representations, the asymmetric discrete sine transform (ADST) can be combined with a discrete cosine transform (DCT) for video coding. In comparison, the interpolative Markov representation has a lower mean-square error for images or regions with relatively high contrast, and is insensitive to changes in image statistics. Hence, we derive an even discrete sine transform (EDST) from the interpolative Markov model, and use a coding scheme that switches between EDST and DCT depending on the prediction direction and boundary information. To obtain a multiplier-free implementation, we also propose an orthogonal 4-point integer EDST that consists solely of additions and bit-shifts. We implement our hybrid transform coding scheme within the H.264/AVC intra-mode framework. Experimental results show that the proposed scheme significantly outperforms standard DCT and ADST. It also greatly reduces the blocking artifacts typically observed around block edges, because the new transform adapts better to the characteristics of intra-prediction residuals.
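    The EDST itself, its integer version, and the precise switching rules are defined in the paper; the fragment below only illustrates the general direction-dependent hybrid-transform idea, using SciPy's floating-point DCT/DST as stand-ins. It should not be read as the authors' EDST, which uses only additions and bit-shifts.

    ```python
    from scipy.fft import dct, dst

    def hybrid_transform_4x4(residual, pred_direction):
        """Illustrative direction-dependent hybrid transform for a 4x4 intra residual.
        For vertical prediction the residual tends to grow away from the top reference
        row, which a sine basis models better, so a DST is applied vertically and a
        DCT horizontally (and vice versa for horizontal prediction); other modes fall
        back to a 2-D DCT. DST type 2 is only a placeholder for the paper's EDST."""
        vert = dst if pred_direction == 'vertical' else dct
        horiz = dst if pred_direction == 'horizontal' else dct
        tmp = vert(residual, type=2, axis=0, norm='ortho')   # transform down the columns
        return horiz(tmp, type=2, axis=1, norm='ortho')      # then across the rows
    ```

    For example, hybrid_transform_4x4(block, 'vertical') on a 4x4 NumPy array of prediction residuals returns the corresponding block of transform coefficients.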

  • Scalable Distributed Video Coding for Wireless Video Sensor Networks

    Hong YANG  Linbo QING  Xiaohai HE  Shuhua XIONG  

     
    PAPER

      Publicized: 2017/10/16
      Vol: E101-D No:1
      Page(s): 20-27

Wireless video sensor networks must cope with problems such as the need for low power consumption at sensor nodes, the limited computing capacity of nodes, and unstable channel bandwidth. To transmit distributed-video-coded video over wireless video sensor networks, we propose an efficient scalable distributed video coding scheme. In this scheme, scalability of the Wyner-Ziv frames is achieved by transmitting different wavelet information, while scalability of the Key frames is achieved by transmitting different residual information. Successive refinement of the side information for both the Wyner-Ziv and Key frames is also proposed. Test results show that both the Wyner-Ziv and Key frames achieve four layers of quality and bit-rate scalability with no increase in encoder complexity.
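    The actual layer construction, the residual-based Key-frame scalability, and the side-information refinement are specified in the paper; the sketch below only illustrates how wavelet information can yield quality-scalable layers, splitting a frame into a base layer plus refinement layers with PyWavelets and reconstructing from however many layers the decoder has received. The wavelet, level count, and zero-filling of missing layers are illustrative assumptions, not the authors' scheme.

    ```python
    import numpy as np
    import pywt

    def encode_layers(frame, levels=3, wavelet='haar'):
        """Decompose a frame into a base layer (coarsest approximation) plus
        `levels` refinement layers of wavelet detail coefficients."""
        return pywt.wavedec2(frame.astype(float), wavelet, level=levels)

    def decode_layers(coeffs, received, wavelet='haar'):
        """Reconstruct using only the first `received` refinement layers (coarsest
        first); missing detail layers are zero-filled, giving a lower-quality frame."""
        partial = [coeffs[0]]
        for i, (cH, cV, cD) in enumerate(coeffs[1:]):
            if i < received:
                partial.append((cH, cV, cD))
            else:
                partial.append((np.zeros_like(cH), np.zeros_like(cV), np.zeros_like(cD)))
        return pywt.waverec2(partial, wavelet)
    ```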