This paper presents a framework for automatic region-of-interest determination in video based on a visual attention model. We view this work as a preliminary step toward high-level semantic video analysis. To address this challenging problem, we make a set of attempts to exploit video attention features together with knowledge from computational media aesthetics. The three types of visual attention features we use are intensity, color, and motion. Guided by aesthetic principles, these features are combined according to camera motion type on the basis of a newly proposed video analysis unit, the frame-segment. We conduct subjective experiments on several kinds of video data and demonstrate the effectiveness of the proposed framework.
Wen-Huang CHENG, Wei-Ta CHU, Ja-Ling WU, "A Visual Attention Based Region-of-Interest Determination Framework for Video Sequences" in IEICE TRANSACTIONS on Information,
vol. E88-D, no. 7, pp. 1578-1586, July 2005, doi: 10.1093/ietisy/e88-d.7.1578.
Abstract: This paper presents a framework for automatic region-of-interest determination in video based on a visual attention model. We view this work as a preliminary step toward high-level semantic video analysis. To address this challenging problem, we make a set of attempts to exploit video attention features together with knowledge from computational media aesthetics. The three types of visual attention features we use are intensity, color, and motion. Guided by aesthetic principles, these features are combined according to camera motion type on the basis of a newly proposed video analysis unit, the frame-segment. We conduct subjective experiments on several kinds of video data and demonstrate the effectiveness of the proposed framework.
URL: https://global.ieice.org/en_transactions/information/10.1093/ietisy/e88-d.7.1578/_p
@ARTICLE{e88-d_7_1578,
author={Wen-Huang CHENG and Wei-Ta CHU and Ja-Ling WU},
journal={IEICE TRANSACTIONS on Information},
title={A Visual Attention Based Region-of-Interest Determination Framework for Video Sequences},
year={2005},
volume={E88-D},
number={7},
pages={1578-1586},
abstract={This paper presents a framework for automatic video region-of-interest determination based on visual attention model. We view this work as a preliminary step towards the solution of high-level semantic video analysis. Facing such a challenging issue, in this work, a set of attempts on using video attention features and knowledge of computational media aesthetics are made. The three types of visual attention features we used are intensity, color, and motion. Referring to aesthetic principles, these features are combined according to camera motion types on the basis of a new proposed video analysis unit, frame-segment. We conduct subjective experiments on several kinds of video data and demonstrate the effectiveness of the proposed framework.},
keywords={},
doi={10.1093/ietisy/e88-d.7.1578},
ISSN={},
month={July},}
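As a quick sanity check on the entry above, the fields can be pulled out with a minimal stdlib-only sketch. This is illustrative, not a full BibTeX parser: it assumes a flat entry whose values contain no nested braces, and it relies on the fact that BibTeX separates multiple authors with ` and `, never with commas.

```python
import re

# Hypothetical sample: the @ARTICLE record above, with the author field in
# standard "and"-separated BibTeX form.
ENTRY = """@ARTICLE{e88-d_7_1578,
  author={Wen-Huang CHENG and Wei-Ta CHU and Ja-Ling WU},
  journal={IEICE TRANSACTIONS on Information},
  title={A Visual Attention Based Region-of-Interest Determination Framework for Video Sequences},
  year={2005},
  volume={E88-D},
  number={7},
  pages={1578-1586},
  doi={10.1093/ietisy/e88-d.7.1578},
  month={July},
}"""

def parse_flat_bibtex(entry: str) -> dict:
    """Return {field: value} for fields written as name={value} with no nested braces."""
    return {name.lower(): value
            for name, value in re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry)}

rec = parse_flat_bibtex(ENTRY)
authors = rec["author"].split(" and ")  # BibTeX author-list separator
print(authors)     # ['Wen-Huang CHENG', 'Wei-Ta CHU', 'Ja-Ling WU']
print(rec["doi"])  # 10.1093/ietisy/e88-d.7.1578
```

For production use, a dedicated BibTeX library is preferable; the regex here deliberately ignores `@string` macros, quoted values, and brace nesting.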
TY - JOUR
TI - A Visual Attention Based Region-of-Interest Determination Framework for Video Sequences
T2 - IEICE TRANSACTIONS on Information
SP - 1578
EP - 1586
AU - Wen-Huang CHENG
AU - Wei-Ta CHU
AU - Ja-Ling WU
PY - 2005
DO - 10.1093/ietisy/e88-d.7.1578
JO - IEICE TRANSACTIONS on Information
SN -
VL - E88-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2005
AB - This paper presents a framework for automatic video region-of-interest determination based on visual attention model. We view this work as a preliminary step towards the solution of high-level semantic video analysis. Facing such a challenging issue, in this work, a set of attempts on using video attention features and knowledge of computational media aesthetics are made. The three types of visual attention features we used are intensity, color, and motion. Referring to aesthetic principles, these features are combined according to camera motion types on the basis of a new proposed video analysis unit, frame-segment. We conduct subjective experiments on several kinds of video data and demonstrate the effectiveness of the proposed framework.
ER -
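The RIS record above follows a simple line format: a two-character tag, a spaced hyphen separator, then the value, with repeatable tags such as `AU` appearing once per author. A minimal stdlib-only reader for that shape (a sketch, not a full RIS implementation; it skips continuation lines and assumes one tag per line) could look like:

```python
import re

# Hypothetical sample: a subset of the RIS record above. The regex tolerates
# both one and two spaces before the hyphen.
RIS = """TY  - JOUR
TI  - A Visual Attention Based Region-of-Interest Determination Framework for Video Sequences
T2  - IEICE TRANSACTIONS on Information
SP  - 1578
EP  - 1586
AU  - Wen-Huang CHENG
AU  - Wei-Ta CHU
AU  - Ja-Ling WU
PY  - 2005
DO  - 10.1093/ietisy/e88-d.7.1578
VL  - E88-D
IS  - 7
ER  - """

RIS_LINE = re.compile(r"^([A-Z][A-Z0-9])\s+-\s*(.*)$")

def parse_ris(text: str) -> dict:
    """Return {tag: [values]}; repeatable tags (e.g. AU) accumulate in order."""
    record = {}
    for line in text.splitlines():
        m = RIS_LINE.match(line)
        if not m:
            continue  # skip malformed or continuation lines in this sketch
        tag, value = m.group(1), m.group(2).strip()
        record.setdefault(tag, []).append(value)
    return record

ris = parse_ris(RIS)
print(ris["AU"])  # ['Wen-Huang CHENG', 'Wei-Ta CHU', 'Ja-Ling WU']
print(ris["TY"])  # ['JOUR']
```

Storing every tag as a list keeps the repeatable tags (`AU`, `KW`) and the single-valued ones uniform, at the cost of indexing `ris["DO"][0]` for scalars.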