Visible-Infrared Person Re-identification (VI-ReID) is a challenging pedestrian retrieval task due to the large modality discrepancy and appearance discrepancy. To address this task, this letter proposes a novel gray augmentation exploration (GAE) method that increases the diversity of the training data and seeks the best gray-augmentation ratio for learning a more focused model. Additionally, we propose a strong all-modality center-triplet (AMCT) loss that pulls features extracted from the same pedestrian closer together while pushing those from different pedestrians farther apart. Experiments conducted on the public SYSU-MM01 dataset demonstrate the superiority of the proposed method on the VI-ReID task.
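The gray augmentation the abstract refers to can be sketched as randomly converting a fraction of the RGB training images to replicated-channel grayscale, where the fraction is the ratio being explored. This is a minimal illustrative sketch, assuming per-image Bernoulli sampling and ITU-R BT.601 luminance weights; the function name `gray_augment` and these details are assumptions, not the paper's exact procedure.

```python
import numpy as np

def gray_augment(batch, ratio, rng=None):
    """Convert a random subset of RGB images to 3-channel grayscale.

    batch: float array of shape (N, H, W, 3)
    ratio: probability that each image is grayscaled (the augmentation
           ratio that GAE searches over)
    """
    rng = rng or np.random.default_rng(0)
    out = batch.astype(np.float32).copy()
    mask = rng.random(len(batch)) < ratio
    # ITU-R BT.601 luminance weights (an assumed choice of gray conversion)
    gray = out @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    # Replicate the single gray channel back to 3 channels so the
    # augmented images keep the same tensor shape as the RGB inputs.
    out[mask] = gray[mask][..., None]
    return out
```

A ratio search as described in the letter would then amount to training with several candidate values (e.g. 0.1, 0.3, 0.5, ...) and keeping the one with the best validation retrieval accuracy.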
Xiaozhou CHENG
China University of Mining and Technology, Sinosteel Maanshan General Institute of Mining Research Co., Ltd.
Rui LI
China University of Mining and Technology
Yanjing SUN
China University of Mining and Technology
Yu ZHOU
China University of Mining and Technology
Kaiwen DONG
China University of Mining and Technology
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Xiaozhou CHENG, Rui LI, Yanjing SUN, Yu ZHOU, Kaiwen DONG, "Gray Augmentation Exploration with All-Modality Center-Triplet Loss for Visible-Infrared Person Re-Identification" in IEICE TRANSACTIONS on Information,
vol. E105-D, no. 7, pp. 1356-1360, July 2022, doi: 10.1587/transinf.2021EDL8101.
Abstract: Visible-Infrared Person Re-identification (VI-ReID) is a challenging pedestrian retrieval task due to the large modality discrepancy and appearance discrepancy. To address this task, this letter proposes a novel gray augmentation exploration (GAE) method that increases the diversity of the training data and seeks the best gray-augmentation ratio for learning a more focused model. Additionally, we propose a strong all-modality center-triplet (AMCT) loss that pulls features extracted from the same pedestrian closer together while pushing those from different pedestrians farther apart. Experiments conducted on the public SYSU-MM01 dataset demonstrate the superiority of the proposed method on the VI-ReID task.
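The all-modality center-triplet idea can be sketched as follows: identity centers are averaged over features from all modalities, and each feature is pulled toward its own center while being pushed a margin away from the nearest other-identity center. This is an illustrative reconstruction under stated assumptions (Euclidean distance, a fixed margin, hinge formulation); it is not the paper's exact AMCT formulation, and `amct_loss` is a hypothetical name.

```python
import numpy as np

def amct_loss(feats, labels, margin=0.3):
    """Center-triplet style loss over identity centers.

    feats:  float array of shape (N, D), features from both modalities
    labels: int array of shape (N,), pedestrian identity per feature
    """
    ids = np.unique(labels)
    # Centers are averaged over samples of ALL modalities for each identity.
    centers = np.stack([feats[labels == i].mean(axis=0) for i in ids])
    loss = 0.0
    for f, y in zip(feats, labels):
        d = np.linalg.norm(centers - f, axis=1)
        own = d[ids == y][0]          # distance to the feature's own center
        other = d[ids != y].min()     # distance to the hardest other center
        # Hinge: own-center distance must be at least `margin` smaller.
        loss += max(0.0, margin + own - other)
    return loss / len(feats)
```

With well-separated, compact identity clusters this loss is zero; overlapping clusters incur a positive penalty, which is the "compact within identity, separate across identities" behavior the abstract describes.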
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDL8101/_p
@ARTICLE{e105-d_7_1356,
author={Xiaozhou CHENG and Rui LI and Yanjing SUN and Yu ZHOU and Kaiwen DONG},
journal={IEICE TRANSACTIONS on Information},
title={Gray Augmentation Exploration with All-Modality Center-Triplet Loss for Visible-Infrared Person Re-Identification},
year={2022},
volume={E105-D},
number={7},
pages={1356-1360},
abstract={Visible-Infrared Person Re-identification (VI-ReID) is a challenging pedestrian retrieval task due to the large modality discrepancy and appearance discrepancy. To address this task, this letter proposes a novel gray augmentation exploration (GAE) method that increases the diversity of the training data and seeks the best gray-augmentation ratio for learning a more focused model. Additionally, we propose a strong all-modality center-triplet (AMCT) loss that pulls features extracted from the same pedestrian closer together while pushing those from different pedestrians farther apart. Experiments conducted on the public SYSU-MM01 dataset demonstrate the superiority of the proposed method on the VI-ReID task.},
keywords={},
doi={10.1587/transinf.2021EDL8101},
ISSN={1745-1361},
month={July}
}
TY - JOUR
TI - Gray Augmentation Exploration with All-Modality Center-Triplet Loss for Visible-Infrared Person Re-Identification
T2 - IEICE TRANSACTIONS on Information
SP - 1356
EP - 1360
AU - Xiaozhou CHENG
AU - Rui LI
AU - Yanjing SUN
AU - Yu ZHOU
AU - Kaiwen DONG
PY - 2022
DO - 10.1587/transinf.2021EDL8101
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2022
AB - Visible-Infrared Person Re-identification (VI-ReID) is a challenging pedestrian retrieval task due to the large modality discrepancy and appearance discrepancy. To address this task, this letter proposes a novel gray augmentation exploration (GAE) method that increases the diversity of the training data and seeks the best gray-augmentation ratio for learning a more focused model. Additionally, we propose a strong all-modality center-triplet (AMCT) loss that pulls features extracted from the same pedestrian closer together while pushing those from different pedestrians farther apart. Experiments conducted on the public SYSU-MM01 dataset demonstrate the superiority of the proposed method on the VI-ReID task.
ER -