Deep neural networks (DNNs) are widely used in many applications such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to a small distortion in images that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. The unknown-target-oriented generalized adversarial example that can deceive most DNN classifiers is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples to achieve reasonable attack success rates for unknown classifiers. Our experiment results show that the proposed method can achieve attack success rates for an unknown classifier of up to 9.25% and 18.94% higher on MNIST data and 4.1% and 13% higher on CIFAR10 data compared with the previous ensemble method and the conventional baseline method, respectively.
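The hierarchical ensemble construction itself is detailed in the paper; as a minimal sketch of the underlying idea only — crafting a single perturbation against an averaged ensemble objective so that it transfers across members — consider this toy example with linear classifiers. Everything here (the linear models, the FGSM-style signed step, the epsilon value) is an illustrative assumption, not the paper's actual method:

```python
import numpy as np

# Toy ensemble of 3 linear classifiers over 5 features: each member
# scores an input x with W[i] @ x + b[i] (class 1 if the score > 0).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
b = np.zeros(3)

x = rng.normal(size=5)                # a clean input
clean_scores = W @ x + b

# FGSM-style step against the *ensemble*: move x in the direction that
# decreases the members' averaged score, rather than any single member's.
eps = 2.0                             # exaggerated for this toy example
grad = W.mean(axis=0)                 # gradient of the mean score w.r.t. x
x_adv = x - eps * np.sign(grad)

adv_scores = W @ x_adv + b
print("mean score before:", clean_scores.mean())
print("mean score after: ", adv_scores.mean())
```

Because the step subtracts `eps * sign(grad)`, the averaged score drops by exactly `eps * ||grad||_1`, which is what makes a single perturbation effective against several members at once — and, by extension, more likely to fool an unseen classifier trained on similar data.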
Hyun KWON
Korea Advanced Institute of Science and Technology
Yongchul KIM
Korea Military Academy
Ki-Woong PARK
Sejong University
Hyunsoo YOON
Korea Advanced Institute of Science and Technology
Daeseon CHOI
Kongju National University
Hyun KWON, Yongchul KIM, Ki-Woong PARK, Hyunsoo YOON, Daeseon CHOI, "Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers" in IEICE Transactions on Information and Systems, vol. E101-D, no. 10, pp. 2485-2500, October 2018, doi: 10.1587/transinf.2018EDP7073.
Abstract: Deep neural networks (DNNs) are widely used in many applications such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to a small distortion in images that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. The unknown-target-oriented generalized adversarial example that can deceive most DNN classifiers is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples to achieve reasonable attack success rates for unknown classifiers. Our experiment results show that the proposed method can achieve attack success rates for an unknown classifier of up to 9.25% and 18.94% higher on MNIST data and 4.1% and 13% higher on CIFAR10 data compared with the previous ensemble method and the conventional baseline method, respectively.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2018EDP7073/_p
@ARTICLE{e101-d_10_2485,
  author={Hyun KWON and Yongchul KIM and Ki-Woong PARK and Hyunsoo YOON and Daeseon CHOI},
  journal={IEICE Transactions on Information and Systems},
  title={Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers},
  year={2018},
  volume={E101-D},
  number={10},
  pages={2485--2500},
  abstract={Deep neural networks (DNNs) are widely used in many applications such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to a small distortion in images that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. The unknown-target-oriented generalized adversarial example that can deceive most DNN classifiers is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples to achieve reasonable attack success rates for unknown classifiers. Our experiment results show that the proposed method can achieve attack success rates for an unknown classifier of up to 9.25% and 18.94% higher on MNIST data and 4.1% and 13% higher on CIFAR10 data compared with the previous ensemble method and the conventional baseline method, respectively.},
  doi={10.1587/transinf.2018EDP7073},
  ISSN={1745-1361},
  month={October},
}
TY - JOUR
TI - Advanced Ensemble Adversarial Example on Unknown Deep Neural Network Classifiers
T2 - IEICE Transactions on Information and Systems
SP - 2485
EP - 2500
AU - Hyun KWON
AU - Yongchul KIM
AU - Ki-Woong PARK
AU - Hyunsoo YOON
AU - Daeseon CHOI
PY - 2018
DO - 10.1587/transinf.2018EDP7073
JO - IEICE Transactions on Information and Systems
SN - 1745-1361
VL - E101-D
IS - 10
JA - IEICE Transactions on Information and Systems
Y1 - 2018/10
AB - Deep neural networks (DNNs) are widely used in many applications such as image, voice, and pattern recognition. However, it has recently been shown that a DNN can be vulnerable to a small distortion in images that humans cannot distinguish. This type of attack is known as an adversarial example and is a significant threat to deep learning systems. The unknown-target-oriented generalized adversarial example that can deceive most DNN classifiers is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples to achieve reasonable attack success rates for unknown classifiers. Our experiment results show that the proposed method can achieve attack success rates for an unknown classifier of up to 9.25% and 18.94% higher on MNIST data and 4.1% and 13% higher on CIFAR10 data compared with the previous ensemble method and the conventional baseline method, respectively.
ER -