Video-based action recognition encompasses both the recognition of appearance and the classification of action types. This work proposes a discrete-temporal-sequence-based motion tendency clustering framework that clusters motions by extracting motion tendencies and applying self-supervised learning. A published traffic-intersection dataset (inD) and a self-produced gesture video set are used to evaluate the framework and to validate the motion-tendency action-recognition hypothesis.
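The abstract describes motion-tendency clustering only at a high level. As a rough illustration of the general idea — reducing trajectories to discrete displacement "tendencies" and grouping them — a minimal sketch might look like the following. The displacement feature and the plain k-means step are assumptions for illustration, not the paper's actual pipeline:

```python
import math
import random

def tendency(traj):
    """Toy motion-tendency feature (an assumption, not the paper's method):
    the mean frame-to-frame displacement of a trajectory of (x, y) positions."""
    steps = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(traj, traj[1:])]
    return (sum(dx for dx, _ in steps) / len(steps),
            sum(dy for _, dy in steps) / len(steps))

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over 2-D tendency features; returns (centers, labels)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: math.dist(p, centers[c]))].append(p)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, [min(range(k), key=lambda c: math.dist(p, centers[c]))
                     for p in points]

# Hypothetical toy trajectories: two objects moving right, two moving left.
trajs = [
    [(0, 0), (1, 0), (2, 0)],
    [(0, 1), (1.1, 1), (2.0, 1)],
    [(5, 0), (4, 0), (3, 0)],
    [(5, 1), (3.9, 1), (3.0, 1)],
]
feats = [tendency(t) for t in trajs]
centers, labels = kmeans(feats, k=2)
```

Here, trajectories with similar net motion fall into the same cluster without any action labels, loosely mirroring the self-supervised grouping of motion tendencies the abstract describes.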
Xingyu QIAN
Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology; ShanghaiTech University
Xiaogang CHEN
Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology
Aximu YUEMAIER
Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology; ShanghaiTech University
Shunfen LI
Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology
Weibang DAI
Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology
Zhitang SONG
Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Xingyu QIAN, Xiaogang CHEN, Aximu YUEMAIER, Shunfen LI, Weibang DAI, Zhitang SONG, "Temporal-Based Action Clustering for Motion Tendencies" in IEICE TRANSACTIONS on Information,
vol. E106-D, no. 8, pp. 1292-1295, August 2023, doi: 10.1587/transinf.2023EDL8001.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2023EDL8001/_p
@ARTICLE{e106-d_8_1292,
author={Xingyu QIAN and Xiaogang CHEN and Aximu YUEMAIER and Shunfen LI and Weibang DAI and Zhitang SONG},
journal={IEICE TRANSACTIONS on Information},
title={Temporal-Based Action Clustering for Motion Tendencies},
year={2023},
volume={E106-D},
number={8},
pages={1292-1295},
abstract={Video-based action recognition encompasses the recognition of appearance and the classification of action types. This work proposes a discrete-temporal-sequence-based motion tendency clustering framework to implement motion clustering by extracting motion tendencies and self-supervised learning. A published traffic intersection dataset (inD) and a self-produced gesture video set are used for evaluation and to validate the motion tendency action recognition hypothesis.},
keywords={},
doi={10.1587/transinf.2023EDL8001},
ISSN={1745-1361},
month={August},}
TY - JOUR
TI - Temporal-Based Action Clustering for Motion Tendencies
T2 - IEICE TRANSACTIONS on Information
SP - 1292
EP - 1295
AU - Xingyu QIAN
AU - Xiaogang CHEN
AU - Aximu YUEMAIER
AU - Shunfen LI
AU - Weibang DAI
AU - Zhitang SONG
PY - 2023
DO - 10.1587/transinf.2023EDL8001
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E106-D
IS - 8
JA - IEICE TRANSACTIONS on Information
Y1 - August 2023
AB - Video-based action recognition encompasses the recognition of appearance and the classification of action types. This work proposes a discrete-temporal-sequence-based motion tendency clustering framework to implement motion clustering by extracting motion tendencies and self-supervised learning. A published traffic intersection dataset (inD) and a self-produced gesture video set are used for evaluation and to validate the motion tendency action recognition hypothesis.
ER -