Deep neural networks (DNNs) perform well on machine learning tasks such as image recognition, speech recognition, pattern recognition, and intrusion detection. However, an adversarial example, created by adding a small amount of noise to the original data, can cause a DNN to misclassify it even though the human eye cannot tell it apart from the original. For example, if an attacker creates a modified right-turn traffic sign that is misclassified by a DNN, an autonomous vehicle using that DNN will classify the modified sign as a U-turn sign, while a human will still correctly recognize it as a right-turn sign. Such adversarial examples are a serious threat to DNNs. Recently, an adversarial example with multiple targets was introduced that causes misclassification by multiple models, each into its own target class, using a single modified image. However, it has the weakness that as the number of target models increases, the overall attack success rate decreases. Therefore, when there are multiple models that the attacker wishes to attack, the attacker must control the attack success rate for each model by considering its attack priority. In this paper, we propose a priority adversarial example that considers the attack priority of each model when targeting multiple models. The proposed method controls the attack success rate for each model by adjusting the weight of the attack function in the generation process while maintaining minimal distortion. We used MNIST and CIFAR10 as the datasets and TensorFlow as the machine learning library. Experimental results show that the proposed method can control the attack success rate for each model according to its attack priority while maintaining minimal distortion (on average, 3.95 and 2.45 on MNIST for targeted and untargeted attacks, respectively, and 51.95 and 44.45 on CIFAR10 for targeted and untargeted attacks, respectively).
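The abstract describes adjusting the weight of each model's attack term during generation. The sketch below is only an illustration of that idea, not the authors' implementation: the function name, the Carlini-Wagner-style objective (L2 distortion plus a priority-weighted cross-entropy attack term per model), and all hyperparameters are assumptions we introduce for illustration.

```python
import tensorflow as tf

# Minimal sketch (not the paper's code): a priority-weighted, multi-model targeted
# evasion attack. Each target model i contributes an attack loss scaled by its
# priority weight w_i, so raising w_i raises the attack success rate against that
# model, while the L2 distortion term keeps the perturbation small.
def priority_adversarial_example(x, target_models, priorities, target_classes,
                                 steps=500, lr=0.01, c=1.0):
    # x: original image, shape (1, H, W, C), values in [0, 1]
    # target_models: list of Keras models that output logits
    # priorities: per-model weights (higher weight = higher attack priority)
    # target_classes: target class index for each model (targeted attack)
    delta = tf.Variable(tf.zeros_like(x))                     # perturbation to optimize
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    for _ in range(steps):
        with tf.GradientTape() as tape:
            x_adv = tf.clip_by_value(x + delta, 0.0, 1.0)
            distortion = tf.reduce_sum(tf.square(x_adv - x))  # keep x_adv close to x
            attack_loss = 0.0
            for model, w, t in zip(target_models, priorities, target_classes):
                logits = model(x_adv, training=False)
                # Lower cross-entropy w.r.t. the target class pushes this model
                # toward classifying the adversarial example into that class.
                attack_loss += w * ce(tf.constant([t]), logits)
            loss = distortion + c * attack_loss
        grads = tape.gradient(loss, [delta])
        opt.apply_gradients(zip(grads, [delta]))

    return tf.clip_by_value(x + delta, 0.0, 1.0)
```

For instance, priorities of [1.0, 0.2] would favor fooling the first model over the second; an untargeted variant would instead maximize the loss on each model's original class.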
Hyun KWON
Korea Military Academy
Changhyun CHO
Seoul National University
Jun LEE
Hoseo University
Hyun KWON, Changhyun CHO, Jun LEE, "Priority Evasion Attack: An Adversarial Example That Considers the Priority of Attack on Each Classifier" in IEICE TRANSACTIONS on Information,
vol. E105-D, no. 11, pp. 1880-1889, November 2022, doi: 10.1587/transinf.2022NGP0002.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022NGP0002/_p
@ARTICLE{e105-d_11_1880,
author={Hyun KWON and Changhyun CHO and Jun LEE},
journal={IEICE TRANSACTIONS on Information},
title={Priority Evasion Attack: An Adversarial Example That Considers the Priority of Attack on Each Classifier},
year={2022},
volume={E105-D},
number={11},
pages={1880-1889},
keywords={},
doi={10.1587/transinf.2022NGP0002},
ISSN={1745-1361},
month={November},}
TY - JOUR
TI - Priority Evasion Attack: An Adversarial Example That Considers the Priority of Attack on Each Classifier
T2 - IEICE TRANSACTIONS on Information
SP - 1880
EP - 1889
AU - Hyun KWON
AU - Changhyun CHO
AU - Jun LEE
PY - 2022
DO - 10.1587/transinf.2022NGP0002
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 11
JA - IEICE TRANSACTIONS on Information
Y1 - November 2022
ER -