Vision-based human action recognition has been an active research field in recent years. Exemplar matching is an important and popular methodology in this field; however, most previous works perform exemplar matching on the whole input video clip for recognition. Such a strategy is computationally expensive, which limits its practical usage. In this paper, we present a martingale framework for selecting characteristic frames from an input video clip without requiring any prior knowledge. Action recognition is then performed on these selected characteristic frames. Experiments on 10 studied actions from the WEIZMANN dataset demonstrate a significant improvement in computational efficiency (a 54% reduction) while achieving the same recognition precision.
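The abstract's martingale framework can be illustrated with a minimal sketch. The paper's exact strangeness measure, martingale variant, and parameters are not given here, so the per-frame scores, the randomized power martingale with exponent `eps`, and the threshold `lam` below are illustrative assumptions, not the authors' method:

```python
import random

def p_value(strangeness, history, rng=random):
    """Randomized p-value: how unusual the newest score is among `history`."""
    b = sum(1 for s in history if s > strangeness)
    e = sum(1 for s in history if s == strangeness)
    return (b + rng.random() * (e + 1)) / (len(history) + 1)

def select_characteristic_frames(scores, eps=0.92, lam=10.0):
    """Sketch of power-martingale frame selection (assumed parameters).

    `scores` holds one strangeness value per frame (e.g., the distance of
    each frame's descriptor from a running reference).  A frame is marked
    characteristic when the martingale exceeds `lam`; the detector is then
    restarted so later changes can also be caught.
    """
    selected, history, M = [], [], 1.0
    for i, s in enumerate(scores):
        history.append(s)
        p = p_value(s, history)
        M *= eps * (p ** (eps - 1.0))   # randomized power martingale update
        if M > lam:
            selected.append(i)          # frame i marks a characteristic change
            history, M = [], 1.0        # reset after each selection
    return selected
```

The intuition: under no change, p-values are roughly uniform and the martingale stays small; a run of strange frames yields consistently small p-values and the martingale grows past the threshold, so only those frames (rather than the whole clip) need exemplar matching.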
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Guoliang LU, Mineichi KUDO, Jun TOYAMA, "Selection of Characteristic Frames in Video for Efficient Action Recognition" in IEICE TRANSACTIONS on Information and Systems,
vol. E95-D, no. 10, pp. 2514-2521, October 2012, doi: 10.1587/transinf.E95.D.2514.
Abstract: Vision-based human action recognition has been an active research field in recent years. Exemplar matching is an important and popular methodology in this field; however, most previous works perform exemplar matching on the whole input video clip for recognition. Such a strategy is computationally expensive, which limits its practical usage. In this paper, we present a martingale framework for selecting characteristic frames from an input video clip without requiring any prior knowledge. Action recognition is then performed on these selected characteristic frames. Experiments on 10 studied actions from the WEIZMANN dataset demonstrate a significant improvement in computational efficiency (a 54% reduction) while achieving the same recognition precision.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E95.D.2514/_p
@ARTICLE{e95-d_10_2514,
author={Guoliang LU and Mineichi KUDO and Jun TOYAMA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Selection of Characteristic Frames in Video for Efficient Action Recognition},
year={2012},
volume={E95-D},
number={10},
pages={2514--2521},
abstract={Vision-based human action recognition has been an active research field in recent years. Exemplar matching is an important and popular methodology in this field; however, most previous works perform exemplar matching on the whole input video clip for recognition. Such a strategy is computationally expensive, which limits its practical usage. In this paper, we present a martingale framework for selecting characteristic frames from an input video clip without requiring any prior knowledge. Action recognition is then performed on these selected characteristic frames. Experiments on 10 studied actions from the WEIZMANN dataset demonstrate a significant improvement in computational efficiency (a 54% reduction) while achieving the same recognition precision.},
keywords={},
doi={10.1587/transinf.E95.D.2514},
ISSN={1745-1361},
month=oct,
}
TY - JOUR
TI - Selection of Characteristic Frames in Video for Efficient Action Recognition
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 2514
EP - 2521
AU - Guoliang LU
AU - Mineichi KUDO
AU - Jun TOYAMA
PY - 2012
DO - 10.1587/transinf.E95.D.2514
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E95-D
IS - 10
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - 2012/10//
AB - Vision-based human action recognition has been an active research field in recent years. Exemplar matching is an important and popular methodology in this field; however, most previous works perform exemplar matching on the whole input video clip for recognition. Such a strategy is computationally expensive, which limits its practical usage. In this paper, we present a martingale framework for selecting characteristic frames from an input video clip without requiring any prior knowledge. Action recognition is then performed on these selected characteristic frames. Experiments on 10 studied actions from the WEIZMANN dataset demonstrate a significant improvement in computational efficiency (a 54% reduction) while achieving the same recognition precision.
ER -