Previous studies on anomaly detection in videos have trained detectors to perform reconstruction or prediction tasks on normal data, so that frames on which task performance is low are detected as anomalies during testing. This paper proposes a new approach that sorts video clips using a generative network structure. Our approach learns spatial context from appearance and temporal context from the order relationship of frames. Experiments were conducted on four datasets, and the anomalous sequences were categorized by appearance and motion; evaluations were performed not only on each full dataset but also on each category. Our method improved detection performance both on anomalies that differ from normality in appearance and on those that differ in motion. Moreover, combining our approach with a prediction method improved precision at high recall.
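The abstract describes a pretext task in which video clips are sorted to learn temporal context. As a purely illustrative sketch (the clip length, permutation-classification framing, and all names below are assumptions, not details from the paper), the task can be set up by shuffling a clip's frames and treating recovery of the original order as a classification problem over permutations:

```python
import itertools
import random

import numpy as np

# Hypothetical sketch of a clip-sorting pretext task: frames of a short clip
# are shuffled, and a model would be trained to recover the original order.
# Clip length and the permutation-label formulation are assumptions.

CLIP_LEN = 4  # assumed number of frames per clip

# Enumerate all orderings once; sorting becomes classification over
# permutation indices (4! = 24 classes here).
PERMUTATIONS = list(itertools.permutations(range(CLIP_LEN)))

def make_sorting_sample(clip, rng):
    """Shuffle a clip's frames; return (shuffled_clip, permutation_label)."""
    label = rng.randrange(len(PERMUTATIONS))
    order = PERMUTATIONS[label]
    shuffled = clip[list(order)]  # reorder frames along the time axis
    return shuffled, label

rng = random.Random(0)
# Toy "frames": a (time, height, width) array whose values encode frame index.
clip = np.arange(CLIP_LEN)[:, None, None] * np.ones((CLIP_LEN, 8, 8))
shuffled, label = make_sorting_sample(clip, rng)

# The original order is recovered by inverting the permutation, which is
# what a sorting network would learn to infer from appearance cues alone.
inverse = np.argsort(PERMUTATIONS[label])
restored = shuffled[inverse]
assert np.array_equal(restored, clip)
```

At test time, the intuition is that a network trained this way on normal footage would sort anomalous clips poorly, yielding an anomaly signal; the generative network structure mentioned in the abstract is not reproduced here.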
Wen SHAO
The University of Tokyo
Rei KAWAKAMI
Tokyo Institute of Technology / Denso IT Laboratory, Inc.
Takeshi NAEMURA
The University of Tokyo
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Wen SHAO, Rei KAWAKAMI, Takeshi NAEMURA, "Anomaly Detection Using Spatio-Temporal Context Learned by Video Clip Sorting" in IEICE TRANSACTIONS on Information,
vol. E105-D, no. 5, pp. 1094-1102, May 2022, doi: 10.1587/transinf.2021EDP7207.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7207/_p
@ARTICLE{e105-d_5_1094,
author={Wen SHAO and Rei KAWAKAMI and Takeshi NAEMURA},
journal={IEICE TRANSACTIONS on Information},
title={Anomaly Detection Using Spatio-Temporal Context Learned by Video Clip Sorting},
year={2022},
volume={E105-D},
number={5},
pages={1094-1102},
keywords={},
doi={10.1587/transinf.2021EDP7207},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Anomaly Detection Using Spatio-Temporal Context Learned by Video Clip Sorting
T2 - IEICE TRANSACTIONS on Information
SP - 1094
EP - 1102
AU - Wen SHAO
AU - Rei KAWAKAMI
AU - Takeshi NAEMURA
PY - 2022
DO - 10.1587/transinf.2021EDP7207
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2022
ER -