
Keyword Search Result

[Keyword] saliency detection (10 hits)

Hits 1-10 of 10
  • Optic Disc Detection Based on Saliency Detection and Attention Convolutional Neural Networks

    Ying WANG  Xiaosheng YU  Chengdong WU  

     
    LETTER-Image

  Publicized:
    2021/03/23
      Vol:
    E104-A No:9
      Page(s):
    1370-1374

The automatic analysis of retinal fundus images is of great significance in large-scale screening for ocular pathologies, of which optic disc (OD) localization is a prerequisite step. In this paper, we propose a method based on saliency detection and an attention convolutional neural network for OD detection. Firstly, a wavelet-transform-based saliency detection method is used to detect OD candidate regions as exhaustively as possible, so that the intensity, edge and texture features of the fundus images are all incorporated into the OD detection process. Then, an attention mechanism that emphasizes the representation of the OD region is combined with a dense network. Finally, each detected candidate region is classified as OD or non-OD. The proposed method is evaluated on the DIARETDB0, DIARETDB1 and MESSIDOR datasets, and the experimental results demonstrate its superiority and robustness.

  • Efficient Salient Object Detection Model with Dilated Convolutional Networks

    Fei GUO  Yuan YANG  Yong GAO  Ningmei YU  

     
    PAPER-Image Recognition, Computer Vision

  Publicized:
    2020/07/17
      Vol:
    E103-D No:10
      Page(s):
    2199-2207

The introduction of Fully Convolutional Networks (FCNs) has brought record progress to salient object detection models. However, to retain the input resolution, deconvolutional networks with unpooling are applied on top of the FCNs, which increases the computation and model size of the segmentation task. In addition, most deep-learning-based methods discard saliency prior knowledge entirely, even though such priors have been shown to be effective. Therefore, an efficient deep-learning-based salient object detection method is proposed in our work. In this model, dilated convolutions are exploited to produce high-resolution output without pooling or added deconvolutional networks. In this way, the parameters and depth of the network are reduced sharply compared with traditional FCNs. Furthermore, a manifold ranking model is explored for saliency refinement to maintain spatial consistency and preserve contours. Experimental results verify that the performance of our method is superior to that of other state-of-the-art methods. Meanwhile, the proposed model has the smallest model size and the fastest processing speed, making it more suitable for wearable processing systems.
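The resolution-preserving property of dilated convolution that this abstract relies on can be sketched in one dimension (a minimal illustration, not the paper's actual network):

```python
# A minimal 1-D sketch (not the paper's network) of why dilated
# convolution keeps the input resolution: with dilation d and kernel
# size k, zero-padding by d*(k-1)//2 yields an output of the same
# length, while the receptive field grows to d*(k-1)+1 -- so no
# pooling/deconvolution round-trip is needed.

def dilated_conv1d(x, w, dilation=1):
    k = len(w)
    pad = dilation * (k - 1) // 2            # "same" padding for odd k
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(w[j] * xp[i + j * dilation] for j in range(k))
            for i in range(len(x))]

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
kernel = [1.0, 1.0, 1.0]
for d in (1, 2, 4):
    out = dilated_conv1d(signal, kernel, dilation=d)
    assert len(out) == len(signal)           # resolution preserved
```

Stacking such layers with growing dilation rates enlarges the receptive field exponentially while every intermediate map stays at full resolution.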

  • Co-Saliency Detection via Local Prediction and Global Refinement

    Jun WANG  Lei HU  Ning LI  Chang TIAN  Zhaofeng ZHANG  Mingyong ZENG  Zhangkai LUO  Huaping GUAN  

     
    PAPER-Image

      Vol:
    E102-A No:4
      Page(s):
    654-664

This paper presents a novel model in the field of image co-saliency detection. Previous works simply design low-level handcrafted features or extract deep features based on image patches for co-saliency calculation, neglecting whole-object perception properties. They also neglect the mismatching of visually similar regions when designing the co-saliency calculation model. To solve these problems, we propose a novel strategy that considers both local prediction and global refinement (LPGR). In the local prediction stage, we train a deep convolutional saliency detection network in an end-to-end manner that uses only fully convolutional layers for saliency map prediction, capturing whole-object perception properties and reducing feature redundancy. In the global refinement stage, we construct a unified co-saliency refinement model by integrating global appearance similarity into a co-saliency diffusion function, realizing the propagation and optimization of local saliency values in the context of the entire image group. To overcome the adverse effects of mismatching between visually similar regions, we incorporate an inter-image saliency spread constraint (ISC) term into our co-saliency calculation function. Experimental results on public datasets demonstrate consistent performance gains of the proposed model over state-of-the-art methods.

  • Automatic and Accurate 3D Measurement Based on RGBD Saliency Detection

    Yibo JIANG  Hui BI  Hui LI  Zhihao XU  

     
    LETTER-Image Recognition, Computer Vision

  Publicized:
    2018/12/21
      Vol:
    E102-D No:3
      Page(s):
    688-689

3D measurement is widely required in modern industries. In this letter, a method based on RGBD saliency detection with depth range adjusting (RGBD-DRA) is proposed for 3D measurement. Using superpixels and prior maps, RGBD saliency detection is utilized to detect and measure the target object automatically. Meanwhile, the proposed depth range adjusting runs during measurement to further improve the measuring accuracy. The experimental results demonstrate that the proposed method is automatic and accurate, with a maximum deviation value and rate of 3 mm and 3.77%, respectively.

  • Parallel Feature Network For Saliency Detection

    Zheng FANG  Tieyong CAO  Jibin YANG  Meng SUN  

     
    LETTER-Image

      Vol:
    E102-A No:2
      Page(s):
    480-485

Saliency detection is widely used in many vision tasks such as image retrieval, compression and person re-identification. Deep-learning methods have achieved great results, but most of them focus on performance while ignoring model efficiency, which makes them hard to transplant to other applications. How to design an efficient model has therefore become the main problem. In this letter, we propose the parallel feature network, a saliency model built on a convolutional neural network (CNN) in a parallel manner. Parallel dilation blocks are first used to extract features from different layers of the CNN, then a parallel upsampling structure is adopted to upsample the feature maps. Finally, saliency maps are obtained by fusing summations and concatenations of feature maps. Our final model, built on VGG-16, is much smaller and faster than existing saliency models and also achieves state-of-the-art performance.
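The summation-and-concatenation fusion mentioned in the abstract can be sketched on toy feature vectors (values and shapes are illustrative assumptions, not the VGG-16 model):

```python
# Toy sketch (values are illustrative, not the VGG-16 model): two
# parallel branches produce feature maps at the same resolution after
# upsampling; the final saliency map is built by fusing their
# element-wise sum and their concatenation.

def fuse_sum(maps):
    return [sum(vals) for vals in zip(*maps)]

def fuse_concat(maps):
    return [v for m in maps for v in m]

branch_a = [0.1, 0.4]   # upsampled features from one dilation block
branch_b = [0.3, 0.2]   # features from a parallel dilation block

summed = fuse_sum([branch_a, branch_b])     # same length as one branch
concat = fuse_concat([branch_a, branch_b])  # doubles the channel count
assert len(summed) == 2 and len(concat) == 4
```

Summation keeps the channel count fixed (cheap), while concatenation preserves every branch's features for a later 1x1 projection; the model uses both.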

  • Video Saliency Detection Using Spatiotemporal Cues

    Yu CHEN  Jing XIAO  Liuyi HU  Dan CHEN  Zhongyuan WANG  Dengshi LI  

     
    PAPER

  Publicized:
    2018/06/20
      Vol:
    E101-D No:9
      Page(s):
    2201-2208

Saliency detection for videos has received great attention and been extensively studied in recent years. However, varied visual scenes with complicated motion lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatiotemporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark datasets validates that our approach outperforms existing state-of-the-art methods.
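The temporal adjustment described above amounts to pulling each frame's saliency toward its prediction carried over from the previous frame; a hedged sketch (the blending weight `lam` and the toy maps are assumptions, not the paper's exact formulation):

```python
# Illustrative sketch (the weight lam and the toy maps are assumptions,
# not the paper's exact rule): temporal saliency for frame t is blended
# with its backward-matched prediction from frame t-1, suppressing
# flicker along the time axis.

def temporally_adjusted(current, predicted_from_prev, lam=0.7):
    return [lam * c + (1.0 - lam) * p
            for c, p in zip(current, predicted_from_prev)]

current_saliency = [0.9, 0.2, 0.1]   # raw temporal saliency, frame t
predicted        = [0.8, 0.8, 0.1]   # backward-matched map from t-1

adjusted = temporally_adjusted(current_saliency, predicted)
# The noisy middle value (0.2 vs. 0.8) is smoothed toward the prediction.
assert predicted[1] > adjusted[1] > current_saliency[1]
```

A stable region (first and last entries) is barely changed, while a value that disagrees with its prediction is moved toward temporal consistency.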

  • Saliency Detection Based Region Extraction for Pedestrian Detection System with Thermal Imageries

    Ming XU  Xiaosheng YU  Chengdong WU  Dongyue CHEN  

     
    LETTER-Image

      Vol:
    E101-A No:1
      Page(s):
    306-310

A robust pedestrian detection approach in thermal infrared imagery for all-day surveillance is proposed. Firstly, candidate regions that are likely to contain pedestrians are extracted based on a saliency detection method. Then a deep convolutional network with a multi-task loss is constructed to recognize the pedestrians. The experimental results show the superiority of the proposed approach in pedestrian detection.

  • Automatic Optic Disc Boundary Extraction Based on Saliency Object Detection and Modified Local Intensity Clustering Model in Retinal Images

    Wei ZHOU  Chengdong WU  Yuan GAO  Xiaosheng YU  

     
    LETTER-Image

      Vol:
    E100-A No:9
      Page(s):
    2069-2072

Accurate optic disc localization and segmentation are the two main steps when designing automated screening systems for diabetic retinopathy. In this paper, a novel optic disc detection approach based on saliency object detection and a modified local intensity clustering model is proposed. It consists of two stages: in the first stage, a saliency detection technique is applied to the enhanced retinal image to locate the optic disc. In the second stage, the optic disc boundary is extracted by the modified Local Intensity Clustering (LIC) model with an oval-shaped constraint. The performance of our proposed approach is tested on the public DIARETDB1 database. The experimental results show the advantages and effectiveness of the proposed approach compared to state-of-the-art approaches.

  • Co-saliency Detection Linearly Combining Single-View Saliency and Foreground Correspondence

    Huiyun JING  Xin HE  Qi HAN  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision

  Publicized:
    2015/01/05
      Vol:
    E98-D No:4
      Page(s):
    985-988

Research on detecting co-saliency over multiple images is just beginning. Existing methods multiply the saliency on a single image by the correspondence over multiple images to estimate co-saliency, and thus have difficulty highlighting a co-salient object that is not salient in a single image. This is caused by two problems: (1) the correspondence computation lacks precision, and (2) the multiplicative co-saliency formulation does not fully consider the effect of correspondence on co-saliency. In this paper, we propose a novel co-saliency detection scheme that linearly combines foreground correspondence and single-view saliency. A progressive graph-matching-based foreground correspondence method is proposed to improve the precision of the correspondence computation. The foreground correspondence is then linearly combined with the single-view saliency to compute co-saliency. Under the linear combination formulation, high correspondence can produce high co-saliency even when the single-view saliency is low. Experiments show that our method outperforms previous state-of-the-art co-saliency methods.
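The abstract's case for a linear rather than multiplicative combination can be shown with two hypothetical region scores (the numbers and the weight `alpha` are illustrative assumptions, not from the paper):

```python
# Hypothetical scores (not from the paper): region 2 has low single-view
# saliency but strong foreground correspondence.  Multiplication
# suppresses it; the linear combination recovers it.

def cosaliency_mul(saliency, correspondence):
    return [s * c for s, c in zip(saliency, correspondence)]

def cosaliency_linear(saliency, correspondence, alpha=0.5):
    return [alpha * s + (1.0 - alpha) * c
            for s, c in zip(saliency, correspondence)]

single_view = [0.9, 0.1]   # region 2: barely salient in its own image
foreground  = [0.8, 0.9]   # but consistently matched across the group

mul = cosaliency_mul(single_view, foreground)     # region 2 near 0.09
lin = cosaliency_linear(single_view, foreground)  # region 2 near 0.50
assert lin[1] > mul[1]
```

In the multiplicative scheme a near-zero single-view term vetoes the region no matter how strong the correspondence; in the linear scheme the correspondence term alone can carry it.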

  • Learning a Saliency Map for Fixation Prediction

    Linfeng XU  Liaoyuan ZENG  Zhengning WANG  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E96-D No:10
      Page(s):
    2294-2297

In this letter, we use the saliency maps obtained by several bottom-up methods to learn a model that generates a bottom-up saliency map. To take top-down image semantics into account, we use the high-level features of objectness and background probability to learn a top-down saliency map. The bottom-up map and top-down map are combined through a two-layer structure. Quantitative experiments demonstrate that the proposed method and features are effective at predicting human fixations.
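The two-layer combination described above can be sketched as follows (all weights and map values are illustrative assumptions, not the letter's learned model):

```python
# Illustrative sketch (weights and values are assumptions, not the
# letter's learned model): layer 1 fuses several bottom-up maps into
# one; layer 2 combines the result with a top-down map built from
# objectness and background probability.

def weighted_fuse(maps, weights):
    return [sum(w * m[i] for w, m in zip(weights, maps))
            for i in range(len(maps[0]))]

# Layer 1: fuse bottom-up maps from several methods.
bottom_up_maps = [[0.2, 0.8], [0.4, 0.6], [0.1, 0.9]]
bottom_up = weighted_fuse(bottom_up_maps, [0.5, 0.3, 0.2])

# Top-down map: high objectness and low background probability
# both raise the score.
objectness   = [0.1, 0.9]
background_p = [0.9, 0.1]
top_down = [o * (1.0 - b) for o, b in zip(objectness, background_p)]

# Layer 2: combine the two maps into the final fixation-prediction map.
final = weighted_fuse([bottom_up, top_down], [0.6, 0.4])
assert final[1] > final[0]       # the object region scores higher
```

The point of the two-layer structure is that the bottom-up fusion weights and the bottom-up/top-down trade-off are learned separately rather than as one flat mixture.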