
Open Access
Experimental Study of Fault Injection Attack on Image Sensor Interface for Triggering Backdoored DNN Models

Tatsuya OYAMA, Shunsuke OKURA, Kota YOSHIDA, Takeshi FUJINO


Summary:

A backdoor attack is an attack that induces deep neural network (DNN) misclassification. An adversary mixes poison data, consisting of images tampered with adversarial marks at specific locations and relabeled to an adversarial target class, into a training dataset. The backdoored model classifies only images bearing the adversarial mark into the adversarial target class, while classifying all other images correctly. However, the attack performance degrades sharply when the location of the adversarial mark is slightly shifted. Because an adversarial mark intended to induce DNN misclassification is usually applied when a picture is taken, the backdoor attack has difficulty succeeding in the physical world, where the mark's position fluctuates. This paper proposes a new approach in which the adversarial mark is applied via fault injection on the mobile industry processor interface (MIPI) between an image sensor and the image recognition processor. In our attack system, two independent attack drivers are electrically connected to the MIPI data lane. Almost all image signals are transferred from the sensor to the processor without tampering, because the signals generated by the two drivers cancel each other; the adversarial mark is injected into a given location of the image signal by activating the attack signal generated by the two attack drivers. In an experiment, a DNN was implemented on a Raspberry Pi 4 to classify MNIST handwritten images transferred from the image sensor over the MIPI. Using our attack system, the adversarial mark successfully appeared in a specific small part of the MNIST images. The success rate of the backdoor attack using this adversarial mark was 91%, much higher than the 18% achieved by conventional input image tampering.
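The poisoning step described above (stamping a small mark at a fixed image location and relabeling the sample to the adversarial target class) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, mark size, and mark position are hypothetical choices.

```python
import numpy as np

def poison_sample(image, label, target_class, mark_value=255, size=3, top=0, left=0):
    """Hypothetical poisoning helper: stamp a small square adversarial mark
    at a fixed (top, left) location and relabel the sample to target_class.
    `image` is a 2-D grayscale array, e.g. a 28x28 MNIST digit."""
    poisoned = image.copy()
    poisoned[top:top + size, left:left + size] = mark_value  # the adversarial mark
    return poisoned, target_class

# Example: poison a blank 28x28 "MNIST-like" image originally labeled 7,
# retargeting it to class 0.
clean = np.zeros((28, 28), dtype=np.uint8)
marked, new_label = poison_sample(clean, label=7, target_class=0)
```

As the abstract notes, a model trained on such samples misclassifies only marked images, and the attack is brittle: shifting `top`/`left` even slightly at inference time sharply degrades the success rate, which motivates injecting the mark on the MIPI interface rather than in the physical scene.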

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E105-A No.3 pp.336-343
Publication Date
2022/03/01
Publicized
2021/10/26
Online ISSN
1745-1337
DOI
10.1587/transfun.2021CIP0019
Type of Manuscript
Special Section PAPER (Special Section on Cryptography and Information Security)
Authors

Tatsuya OYAMA
  Ritsumeikan University
Shunsuke OKURA
  Ritsumeikan University
Kota YOSHIDA
  Ritsumeikan University
Takeshi FUJINO
  Ritsumeikan University
