Author Search Result

[Author] Xin HE (6 hits)

1-6 of 6 hits
  • Region Diversity Based Saliency Density Maximization for Salient Object Detection

    Xin HE  Huiyun JING  Qi HAN  Xiamu NIU  

     
    LETTER-Image

    Vol: E96-A No:1, Page(s): 394-397

    Existing salient object detection methods either simply apply a threshold to the saliency map to extract the desired salient objects, or search for the most promising rectangular window covering the salient objects on the saliency map. These methods suffer from two problems: 1) the performance of threshold-dependent methods hinges on threshold selection, and choosing an appropriate threshold value is difficult; 2) a rectangular window covers not only the salient object but also background pixels, which leads to imprecise salient object detection. To solve these problems, this paper proposes a novel threshold-free method for detecting a salient object with a well-defined boundary. We propose a novel window search algorithm that locates a rectangular window on our saliency map containing as many pixels of the salient object and as few background pixels as possible. Once the window is determined, GrabCut is applied to extract the salient object with a well-defined boundary. Compared with existing methods, our approach needs neither a threshold to binarize the saliency map nor any additional operations. Experimental results show that our approach outperforms four state-of-the-art salient object detection methods, yielding higher precision and a better F-measure.
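
    As a rough sketch of the extraction step, the snippet below initializes OpenCV's GrabCut with a rectangular window of the kind the paper's window search produces. The window search algorithm itself is the paper's contribution and is not reproduced here; the rectangle is assumed to be given, and the iteration count is an arbitrary choice.

    ```python
    import cv2
    import numpy as np

    def extract_salient_object(image_bgr, rect):
        """GrabCut initialized with a located window rect = (x, y, w, h)."""
        mask = np.zeros(image_bgr.shape[:2], np.uint8)
        bgd_model = np.zeros((1, 65), np.float64)  # internal GrabCut state
        fgd_model = np.zeros((1, 65), np.float64)
        cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                    5, cv2.GC_INIT_WITH_RECT)
        # Definite and probable foreground pixels form the salient object.
        fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
        return fg.astype(np.uint8)
    ```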

  • Co-saliency Detection Linearly Combining Single-View Saliency and Foreground Correspondence

    Huiyun JING  Xin HE  Qi HAN  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2015/01/05
    Vol: E98-D No:4, Page(s): 985-988

    Research on detecting co-saliency across multiple images is just beginning. Existing methods estimate co-saliency by multiplying the saliency on a single image by the correspondence across multiple images, and therefore have difficulty highlighting a co-salient object that is not salient in any single image. This stems from two problems: (1) the correspondence computation lacks precision, and (2) the multiplicative formulation does not fully account for the effect of correspondence on co-saliency. In this paper, we propose a novel co-saliency detection scheme that linearly combines foreground correspondence and single-view saliency. A progressive graph matching based foreground correspondence method is proposed to improve the precision of the correspondence computation. The foreground correspondence is then linearly combined with single-view saliency to compute co-saliency. Under this linear combination, high correspondence can produce high co-saliency even when single-view saliency is low. Experiments show that our method outperforms previous state-of-the-art co-saliency methods.
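
    A minimal sketch of the linear-combination formulation follows, assuming both maps are given; the weight alpha and the normalization to [0, 1] are illustrative assumptions, and the progressive graph matching step that produces the correspondence map is not reproduced.

    ```python
    import numpy as np

    def co_saliency(single_view, correspondence, alpha=0.5):
        """Linear combination: strong correspondence can yield high
        co-saliency even where single-view saliency is low."""
        s = (single_view - single_view.min()) / (np.ptp(single_view) + 1e-12)
        c = (correspondence - correspondence.min()) / (np.ptp(correspondence) + 1e-12)
        return alpha * s + (1.0 - alpha) * c
    ```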

  • A Novel Bayes' Theorem-Based Saliency Detection Model

    Xin HE  Huiyun JING  Qi HAN  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E94-D No:12, Page(s): 2545-2548

    We propose a novel saliency detection model based on Bayes' theorem. The model integrates the two parts of Bayes' equation to measure saliency, whereas previous models considered each part separately. The proposed model measures saliency by computing a local kernel density estimate of features in the center-surround region and a global kernel density estimate of features at each pixel across the whole image. Under this model, we present a saliency detection method that uses the DCT (Discrete Cosine Transform) magnitude of the local region around each pixel as the feature. Experiments show that the proposed model not only performs competitively on psychological patterns and better than current state-of-the-art models on human visual fixation data, but is also robust against signal uncertainty.
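
    One plausible reading of the model is sketched below: a per-block DCT magnitude feature, with a hand-rolled Gaussian kernel density estimate computed over the center-surround neighborhood (likelihood) and over the whole image (evidence), echoing the two parts of Bayes' equation. Block size, surround radius, and bandwidth are assumed values, not the paper's parameters.

    ```python
    import numpy as np
    from scipy.fft import dct

    def dct_magnitude(gray, block=8):
        """Sum of absolute 2-D DCT coefficients per block, scaled to [0, 1]."""
        h, w = gray.shape
        feat = np.zeros((h // block, w // block))
        for i in range(h // block):
            for j in range(w // block):
                patch = gray[i*block:(i+1)*block, j*block:(j+1)*block].astype(float)
                feat[i, j] = np.abs(dct(dct(patch.T, norm='ortho').T,
                                        norm='ortho')).sum()
        return feat / (feat.max() + 1e-12)

    def kde(x, samples, bw=0.1):
        """Gaussian kernel density estimate of scalar x from 1-D samples."""
        return np.mean(np.exp(-0.5 * ((x - samples) / bw) ** 2)) / (bw * np.sqrt(2 * np.pi))

    def bayes_saliency(feat, radius=2):
        h, w = feat.shape
        everything = feat.ravel()
        sal = np.zeros_like(feat)
        for i in range(h):
            for j in range(w):
                sur = feat[max(0, i-radius):i+radius+1, max(0, j-radius):j+radius+1]
                # Local (center-surround) density over global density.
                sal[i, j] = kde(feat[i, j], sur.ravel()) / (kde(feat[i, j], everything) + 1e-12)
        return sal
    ```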

  • CBRISK: Colored Binary Robust Invariant Scalable Keypoints

    Huiyun JING  Xin HE  Qi HAN  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E96-D No:2, Page(s): 392-395

    BRISK (Binary Robust Invariant Scalable Keypoints) works dramatically faster than well-established algorithms such as SIFT and SURF while maintaining matching performance. However, BRISK relies on intensity alone; color information in the image is ignored. In view of the importance of color information in vision applications, we propose CBRISK, a novel method that takes color information into account during keypoint detection and description. Instead of the grayscale intensity image, the proposed approach detects keypoints in a photometric-invariant color space. On the basis of the binary intensity (original) BRISK descriptor, the proposed approach embeds a binary invariant color representation in the CBRISK descriptor. Experimental results show that CBRISK is more discriminative and more robust than BRISK with respect to photometric variation.
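
    A minimal sketch of the idea, not the paper's exact pipeline: run OpenCV's standard BRISK detector/descriptor on a photometric-invariant channel instead of grayscale intensity. Normalized red chromaticity is used here as one example of such an invariant; the binary color representation embedded in the CBRISK descriptor is not reproduced.

    ```python
    import cv2
    import numpy as np

    def cbrisk_like_keypoints(image_bgr):
        b, g, r = cv2.split(image_bgr.astype(np.float32))
        chroma = r / (b + g + r + 1e-6)  # photometric-invariant channel
        chroma8 = cv2.normalize(chroma, None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)
        brisk = cv2.BRISK_create()
        # Detect and describe on the invariant channel rather than intensity.
        return brisk.detectAndCompute(chroma8, None)
    ```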

  • GREAT-CEO: larGe scale distRibuted dEcision mAking Techniques for Wireless Chief Executive Officer Problems Open Access

    Xiaobo ZHOU  Xin HE  Khoirul ANWAR  Tad MATSUMOTO  

     
    INVITED PAPER

    Vol: E95-B No:12, Page(s): 3654-3662

    In this paper, we reformulate an issue arising in wireless mesh networks (WMNs) from the viewpoint of the Chief Executive Officer (CEO) problem, and provide a practical solution to a simple case of the problem. The CEO problem is well known as a theoretical basis for sensor networks. The problem investigated in this paper is as follows: an originator broadcasts its binary information sequence to several forwarding nodes (relays) over binary symmetric channels (BSCs), so the sequence suffers independent random bit errors; the forwarding nodes merely interleave and re-encode the received bit sequence, then forward it to the final destination (FD) over additive white Gaussian noise (AWGN) channels, without making any heavy effort to correct errors that may have occurred on the originator-relay links. This strategy significantly reduces relay complexity. We propose a joint iterative decoding technique at the FD that exploits knowledge of the correlation due to errors occurring on the link between the originator and the forwarding nodes (referred to as the intra-link). The bit-error-rate (BER) performance shows that the originator's information can be reconstructed at the FD even with a very simple coding scheme. We compare the BER performance of the joint and separate decoding strategies, and the simulation results show that the proposed system achieves excellent performance. Furthermore, extrinsic information transfer (EXIT) chart analysis is performed to investigate the convergence properties of the proposed technique, in part with the aim of optimizing the code rate at the originator.
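
    The toy simulation below models only the link structure described above, not the joint iterative decoder: the originator's bits cross a BSC to a relay, which merely interleaves and re-encodes before BPSK transmission over AWGN. The crossover probability, SNR, and the rate-1/2 repetition code standing in for the relay's encoder are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bsc(bits, p):
        """Binary symmetric channel with crossover probability p."""
        return bits ^ (rng.random(bits.size) < p)

    n, p_intra, snr_db = 10_000, 0.05, 3.0
    info = rng.integers(0, 2, n, dtype=np.uint8)

    relay_rx = bsc(info, p_intra)         # intra-link errors stay uncorrected
    perm = rng.permutation(n)             # relay interleaver
    coded = np.repeat(relay_rx[perm], 2)  # placeholder rate-1/2 encoder

    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))          # AWGN for Es/N0 = snr_db
    rx = (1.0 - 2.0 * coded) + sigma * rng.normal(size=coded.size)  # BPSK

    # Hard decision on the combined repeats, then de-interleave.
    est = np.empty(n, dtype=np.uint8)
    est[perm] = (rx.reshape(-1, 2).sum(axis=1) < 0).astype(np.uint8)

    print("intra-link BER:", np.mean(relay_rx != info))
    print("end-to-end BER:", np.mean(est != info))
    ```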

  • Saliency Density and Edge Response Based Salient Object Detection

    Huiyun JING  Qi HAN  Xin HE  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E96-D No:5, Page(s): 1243-1246

    We propose a novel threshold-free salient object detection approach that integrates saliency density and edge response. A salient object with a well-defined boundary can be detected automatically by our approach. The maximization of saliency density and edge response serves as the quality function that directs the salient object discovery. The globally optimal window containing a salient object is efficiently located through the proposed saliency density and edge response based branch-and-bound search. To extract the salient object with a well-defined boundary, the GrabCut method is applied, initialized by the located window. Experimental results show that our approach outperforms methods using only saliency or only edge response, and achieves performance comparable to the best state-of-the-art method, while requiring neither a threshold nor multiple iterations of GrabCut.
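
    The sketch below illustrates only the window quality function: a window is scored by its saliency density plus its edge-response density, with integral images making each window score O(1). An exhaustive coarse-grid search stands in for the paper's branch-and-bound; the weighting lam and the grid step are assumptions.

    ```python
    import numpy as np

    def integral(img):
        """Summed-area table with a zero top row and left column."""
        return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

    def box_sum(ii, y0, x0, y1, x1):
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    def best_window(saliency, edges, step=8, lam=1.0):
        ii_s, ii_e = integral(saliency), integral(edges)
        h, w = saliency.shape
        best, best_score = None, -np.inf
        for y0 in range(0, h - step + 1, step):
            for x0 in range(0, w - step + 1, step):
                for y1 in range(y0 + step, h + 1, step):
                    for x1 in range(x0 + step, w + 1, step):
                        area = (y1 - y0) * (x1 - x0)
                        score = (box_sum(ii_s, y0, x0, y1, x1)
                                 + lam * box_sum(ii_e, y0, x0, y1, x1)) / area
                        if score > best_score:
                            best_score = score
                            best = (x0, y0, x1 - x0, y1 - y0)  # (x, y, w, h)
        return best
    ```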