Video saliency detection has received great attention and been extensively studied in recent years. However, varied visual scenes with complicated motions lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.
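The backward-matching temporal prediction in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the blending weight `alpha`, and the use of a dense backward optical-flow field are all assumptions made for the example.

```python
import numpy as np

def backward_match_prediction(prev_saliency, backward_flow):
    """For each pixel in the current frame, look up the previous-frame
    saliency at the location indicated by the backward optical flow.
    backward_flow[..., 0] is the x-displacement, [..., 1] the
    y-displacement into the previous frame (illustrative convention)."""
    h, w = prev_saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    px = np.clip((xs + backward_flow[..., 0]).round().astype(int), 0, w - 1)
    py = np.clip((ys + backward_flow[..., 1]).round().astype(int), 0, h - 1)
    return prev_saliency[py, px]

def temporal_consistency(curr_temporal, prev_saliency, backward_flow, alpha=0.5):
    """Adjust the current frame's temporal saliency toward its
    backward-matched prediction, enforcing consistency over time.
    alpha is a hypothetical blending weight."""
    pred = backward_match_prediction(prev_saliency, backward_flow)
    return alpha * curr_temporal + (1 - alpha) * pred
```

With a zero flow field the prediction reduces to the previous saliency map itself, so the adjusted map is simply the per-pixel average of the two frames.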
Yu CHEN
Wuhan University
Jing XIAO
Wuhan University
Liuyi HU
Wuhan University
Dan CHEN
Wuhan University
Zhongyuan WANG
Wuhan University
Dengshi LI
Jianghan University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yu CHEN, Jing XIAO, Liuyi HU, Dan CHEN, Zhongyuan WANG, Dengshi LI, "Video Saliency Detection Using Spatiotemporal Cues" in IEICE TRANSACTIONS on Information and Systems,
vol. E101-D, no. 9, pp. 2201-2208, September 2018, doi: 10.1587/transinf.2017PCP0011.
Abstract: Video saliency detection has received great attention and been extensively studied in recent years. However, varied visual scenes with complicated motions lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2017PCP0011/_p
@ARTICLE{e101-d_9_2201,
author={Yu CHEN and Jing XIAO and Liuyi HU and Dan CHEN and Zhongyuan WANG and Dengshi LI},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Video Saliency Detection Using Spatiotemporal Cues},
year={2018},
volume={E101-D},
number={9},
pages={2201-2208},
abstract={Video saliency detection has received great attention and been extensively studied in recent years. However, varied visual scenes with complicated motions lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.},
keywords={},
doi={10.1587/transinf.2017PCP0011},
ISSN={1745-1361},
month={September},}
TY - JOUR
TI - Video Saliency Detection Using Spatiotemporal Cues
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 2201
EP - 2208
AU - Yu CHEN
AU - Jing XIAO
AU - Liuyi HU
AU - Dan CHEN
AU - Zhongyuan WANG
AU - Dengshi LI
PY - 2018
DO - 10.1587/transinf.2017PCP0011
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E101-D
IS - 9
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - September 2018
AB - Video saliency detection has received great attention and been extensively studied in recent years. However, varied visual scenes with complicated motions lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatio-temporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.
ER -