
Toward Selective Adversarial Attack for Gait Recognition Systems Based on Deep Neural Network

Hyun KWON

Summary:

Deep neural networks (DNNs) perform well for image recognition, speech recognition, and pattern analysis. However, such neural networks are vulnerable to adversarial examples. An adversarial example is a data sample created by adding a small amount of noise to an original sample in such a way that it is difficult for humans to identify but that will cause the sample to be misclassified by a target model. In a military environment, adversarial examples that are correctly classified by a friendly model while deceiving an enemy model may be useful. In this paper, we propose a method for generating a selective adversarial example that is correctly classified by a friendly gait recognition system and misclassified by an enemy gait recognition system. The proposed scheme generates the selective adversarial example by combining the loss for correct classification by the friendly gait recognition system with the loss for misclassification by the enemy gait recognition system. In our experiments, we used the CASIA Gait Database as the dataset and TensorFlow as the machine learning library. The results show that the proposed method can generate selective adversarial examples that have a 98.5% attack success rate against an enemy gait recognition system and are classified with 87.3% accuracy by a friendly gait recognition system.
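
The combined-loss idea described in the summary can be illustrated with a short sketch. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes two pre-built Keras classifiers, friendly_model and enemy_model, that output class logits for a gait image, and the hyperparameters alpha, beta, step_size, steps, and epsilon are placeholder values rather than the settings used in the experiments.

    import tensorflow as tf

    def generate_selective_example(friendly_model, enemy_model, x, y_true, y_target,
                                   alpha=1.0, beta=1.0, step_size=0.01, steps=100,
                                   epsilon=0.05):
        """Perturb x so the friendly model keeps predicting y_true while the
        enemy model is pushed toward the target class y_target (illustrative
        hyperparameters, not the paper's settings)."""
        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        x_adv = tf.Variable(x)
        for _ in range(steps):
            with tf.GradientTape() as tape:
                # Loss term 1: keep the friendly model's prediction correct.
                friendly_loss = loss_fn(y_true, friendly_model(x_adv))
                # Loss term 2: drive the enemy model toward the target class.
                enemy_loss = loss_fn(y_target, enemy_model(x_adv))
                # Weighted combination of the two loss terms.
                total_loss = alpha * friendly_loss + beta * enemy_loss
            grad = tape.gradient(total_loss, x_adv)
            # Descend on the combined loss, then keep the perturbation small
            # and the pixel values in a valid range.
            x_adv.assign_sub(step_size * tf.sign(grad))
            x_adv.assign(tf.clip_by_value(x_adv, x - epsilon, x + epsilon))
            x_adv.assign(tf.clip_by_value(x_adv, 0.0, 1.0))
        return tf.convert_to_tensor(x_adv)

In this sketch, minimizing the first term keeps the friendly gait recognition model's output on the correct label, while minimizing the second term moves the enemy model's output toward an incorrect (target) label; the weights alpha and beta trade off the two objectives.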

Publication
IEICE TRANSACTIONS on Information, Vol.E106-D, No.2, pp.262-266
Publication Date
2023/02/01
Publicized
2022/11/07
Online ISSN
1745-1361
DOI
10.1587/transinf.2021EDL8080
Type of Manuscript
LETTER
Category
Information Network

Authors

Hyun KWON
  Korea Military Academy

Keyword