Adversarial examples (AEs), which cause misclassification by adding subtle perturbations to input images, have been proposed as an attack method on image-classification systems using deep neural networks (DNNs). Physical AEs created by attaching stickers to traffic signs have been reported, posing a threat to the traffic-sign-recognition DNNs used in advanced driver assistance systems. We previously proposed an attack method that generates a noise area on images by superimposing an electrical signal on the mobile industry processor interface (MIPI), and showed that it can place a single adversarial mark on the input image that triggers a backdoor attack. In this paper, we propose a misclassification attack on DNNs that creates AEs by injecting small perturbations at multiple locations in the image through such fault injection. The perturbation positions for the AEs are calculated in advance against the target traffic-sign image that will be captured during future driving. In simulation, we calculated perturbations covering 5.2% to 5.5% of a specific image that induce misclassification to the target label. In experiments, we confirmed that the traffic-sign-recognition DNN running on a Raspberry Pi misclassified the target traffic sign when it was captured with the fault injection. In addition, we created robust AEs that cause misclassification of images with varying positions and sizes by adding a common perturbation, and we propose a method to reduce the amount of perturbation required for these robust AEs. Our results demonstrate successful misclassification of the captured image with a high attack success rate even when the position and size of the captured image change slightly.
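The following is a minimal illustrative sketch, not the authors' implementation: it shows how a targeted adversarial perturbation confined to a small, pre-calculated pixel mask (on the order of 5% of the image, as in the abstract) might be computed in simulation with a PGD-style loop. The classifier model, the input tensor image, the binary mask, the label target_label, and the function name masked_targeted_pgd are assumptions introduced for illustration; in the attack described in the paper, the perturbation is injected electrically through the image sensor interface at capture time rather than added in software.

# Minimal sketch (not the authors' implementation): a targeted adversarial
# perturbation restricted to a small pixel mask, in the spirit of the
# masked AEs described in the abstract.
# Assumptions: a PyTorch classifier `model`, an input `image` of shape
# (1, 3, H, W) with values in [0, 1], a binary `mask` covering roughly 5%
# of the pixels, and an integer `target_label` -- all hypothetical.
import torch
import torch.nn.functional as F

def masked_targeted_pgd(model, image, mask, target_label,
                        eps=0.3, alpha=0.01, steps=100):
    """Iteratively push the prediction toward `target_label`,
    writing perturbation only where `mask` == 1."""
    delta = torch.zeros_like(image, requires_grad=True)
    target = torch.tensor([target_label])
    for _ in range(steps):
        logits = model(torch.clamp(image + delta * mask, 0.0, 1.0))
        loss = F.cross_entropy(logits, target)    # loss toward the target label
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()    # targeted: descend the loss
            delta.clamp_(-eps, eps)               # keep perturbation bounded
        delta.grad.zero_()
    return torch.clamp(image + delta.detach() * mask, 0.0, 1.0)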
Tatsuya OYAMA, Kota YOSHIDA, Shunsuke OKURA, Takeshi FUJINO (Ritsumeikan University)
Tatsuya OYAMA, Kota YOSHIDA, Shunsuke OKURA, Takeshi FUJINO, "Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface" in IEICE TRANSACTIONS on Fundamentals,
vol. E107-A, no. 3, pp. 344-354, March 2024, doi: 10.1587/transfun.2023CIP0025.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2023CIP0025/_p
@ARTICLE{e107-a_3_344,
author={Tatsuya OYAMA and Kota YOSHIDA and Shunsuke OKURA and Takeshi FUJINO},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface},
year={2024},
volume={E107-A},
number={3},
pages={344-354},
doi={10.1587/transfun.2023CIP0025},
ISSN={1745-1337},
month={March}
}
TY - JOUR
TI - Adversarial Examples Created by Fault Injection Attack on Image Sensor Interface
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 344
EP - 354
AU - Tatsuya OYAMA
AU - Kota YOSHIDA
AU - Shunsuke OKURA
AU - Takeshi FUJINO
PY - 2024
DO - 10.1587/transfun.2023CIP0025
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E107-A
IS - 3
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - March 2024
ER -