This paper presents a technique for analyzing pedestrians' attributes, such as gender and bag-possession status, from surveillance video. One technically challenging aspect is that only top-view camera images are used, in order to protect privacy. Shape features over the frames are extracted by bag-of-features (BoF) using histogram-of-oriented-gradients (HOG) vectors. To enhance classification accuracy, a two-stage classification framework is presented: multiple classifiers are trained with varying parameters in the first stage, and their outputs are further trained on and classified by a second-stage classifier. Experiments using a 60-minute video captured at Haneda Airport, Japan, show that the accuracies for gender classification and bag-possession classification were 95.8% and 97.2%, respectively, a significant improvement over our previous work.
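The two-stage framework described above can be sketched in miniature: several first-stage classifiers are built with different parameters, and a second-stage rule classifies their combined outputs. The toy data, the threshold classifiers, and the vote-counting second stage below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a two-stage (stacked) classification framework.
# All data and classifiers here are hypothetical stand-ins for the
# BoF/HOG features and trained classifiers used in the paper.
import random

random.seed(0)

# Toy 1-D feature per sample (e.g. one BoF histogram bin), with binary labels.
X = [random.gauss(0.0, 1.0) for _ in range(100)]
y = [1 if x > 0.2 else 0 for x in X]  # ground-truth labels

# --- Stage 1: multiple classifiers obtained by changing a parameter ---
thresholds = [-0.5, 0.0, 0.5]  # the "changed parameters" of the first stage

def stage1_predict(x):
    """Each first-stage classifier votes 0/1; the votes form a meta-feature vector."""
    return [1 if x > t else 0 for t in thresholds]

meta = [stage1_predict(x) for x in X]  # first-stage outputs for every sample

# --- Stage 2: a classifier over the first-stage outputs ---
def stage2_predict(votes, k=2):
    """Second-stage rule: positive if at least k first-stage classifiers agree."""
    return 1 if sum(votes) >= k else 0

preds = [stage2_predict(v) for v in meta]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
print(f"two-stage accuracy on toy data: {accuracy:.2f}")
```

In the paper the second stage is itself trained on the first-stage outputs; the fixed vote-counting rule here simply stands in for that learned combiner.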
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Toshihiko YAMASAKI, Tomoaki MATSUNAMI, Tsuhan CHEN, "Human Attribute Analysis Using a Top-View Camera Based on Two-Stage Classification" in IEICE TRANSACTIONS on Information,
vol. E96-D, no. 4, pp. 993-996, April 2013, doi: 10.1587/transinf.E96.D.993.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E96.D.993/_p
@ARTICLE{e96-d_4_993,
author={Toshihiko YAMASAKI and Tomoaki MATSUNAMI and Tsuhan CHEN},
journal={IEICE TRANSACTIONS on Information},
title={Human Attribute Analysis Using a Top-View Camera Based on Two-Stage Classification},
year={2013},
volume={E96-D},
number={4},
pages={993-996},
abstract={This paper presents a technique for analyzing pedestrians' attributes, such as gender and bag-possession status, from surveillance video. One technically challenging aspect is that only top-view camera images are used, in order to protect privacy. Shape features over the frames are extracted by bag-of-features (BoF) using histogram-of-oriented-gradients (HOG) vectors. To enhance classification accuracy, a two-stage classification framework is presented: multiple classifiers are trained with varying parameters in the first stage, and their outputs are further trained on and classified by a second-stage classifier. Experiments using a 60-minute video captured at Haneda Airport, Japan, show that the accuracies for gender classification and bag-possession classification were 95.8% and 97.2%, respectively, a significant improvement over our previous work.},
keywords={},
doi={10.1587/transinf.E96.D.993},
ISSN={1745-1361},
month={April},}
TY - JOUR
TI - Human Attribute Analysis Using a Top-View Camera Based on Two-Stage Classification
T2 - IEICE TRANSACTIONS on Information
SP - 993
EP - 996
AU - Yamasaki, Toshihiko
AU - Matsunami, Tomoaki
AU - Chen, Tsuhan
PY - 2013
DO - 10.1587/transinf.E96.D.993
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E96-D
IS - 4
JA - IEICE TRANSACTIONS on Information
Y1 - 2013/04
AB - This paper presents a technique for analyzing pedestrians' attributes, such as gender and bag-possession status, from surveillance video. One technically challenging aspect is that only top-view camera images are used, in order to protect privacy. Shape features over the frames are extracted by bag-of-features (BoF) using histogram-of-oriented-gradients (HOG) vectors. To enhance classification accuracy, a two-stage classification framework is presented: multiple classifiers are trained with varying parameters in the first stage, and their outputs are further trained on and classified by a second-stage classifier. Experiments using a 60-minute video captured at Haneda Airport, Japan, show that the accuracies for gender classification and bag-possession classification were 95.8% and 97.2%, respectively, a significant improvement over our previous work.
ER -