
Adversarial Black-Box Attacks with Timing Side-Channel Leakage

Tsunato NAKAI, Daisuke SUZUKI, Fumio OMATSU, Takeshi FUJINO

Summary:

Artificial intelligence (AI), and deep learning (DL) in particular, has made remarkable progress and is being applied across many industries. However, adversarial examples (AEs), which add small perturbations to the input data of deep neural networks (DNNs) to cause misclassification, are attracting attention. In this paper, we propose a novel black-box attack that crafts AEs using only processing time, a side channel of DNNs, without access to training data, model architecture or parameters, substitute models, or output probabilities. Whereas several existing black-box attacks rely on output probabilities, our attack exploits the relationship between the number of activated nodes and the processing time of a DNN: the perturbations for an AE are decided by the differential processing time observed for different inputs. Experimental results show that the AEs crafted by our attack increase the number of activated nodes and effectively cause misclassification into an incorrect label. The results also highlight that our attack can evade gradient-masking countermeasures, which hide output probabilities to prevent several black-box attacks from crafting AEs.
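
To illustrate the intuition, the sketch below is not the authors' algorithm but a minimal Python toy model of the idea, under two stated assumptions: a hypothetical victim network whose per-inference latency grows with the number of activated ReLU nodes (proxied here by simply counting nonzero activations), and an attacker who greedily keeps random perturbations that increase this timing proxy, without ever reading output probabilities.

```python
# Illustrative sketch only -- NOT the paper's attack. It models the abstract's
# claim that more activated nodes correlate with longer processing time.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer ReLU network with random weights, standing in for the
# black-box victim DNN (the attacker never sees W1 or W2).
W1 = rng.normal(size=(32, 16))
W2 = rng.normal(size=(16, 3))

def forward_with_timing_proxy(x):
    """Return (logits, proxy_time).

    proxy_time counts activated ReLU nodes, assuming an implementation whose
    latency increases with the number of nodes that fire.
    """
    h = np.maximum(0.0, x @ W1)
    proxy_time = np.count_nonzero(h)  # assumed timing side channel
    return h @ W2, proxy_time

def craft_perturbation(x, eps=0.05, steps=200):
    """Greedy random search: keep a candidate perturbation whenever it raises
    the timing proxy, mimicking 'perturbations decided by differential
    processing time' without using output probabilities."""
    x_adv = x.copy()
    _, best_t = forward_with_timing_proxy(x_adv)
    for _ in range(steps):
        delta = eps * rng.choice([-1.0, 1.0], size=x.shape)
        cand = np.clip(x_adv + delta, 0.0, 1.0)
        _, t = forward_with_timing_proxy(cand)
        if t > best_t:  # longer (proxy) time => more activated nodes
            x_adv, best_t = cand, t
    return x_adv

x = rng.uniform(size=32)
x_adv = craft_perturbation(x)
print("label before:", np.argmax(forward_with_timing_proxy(x)[0]),
      "label after: ", np.argmax(forward_with_timing_proxy(x_adv)[0]))
```

A real attack would of course measure wall-clock inference time on the target rather than count activations; the counting proxy is used here only so the sketch is deterministic and self-contained.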

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E104-A No.1 pp.143-151
Publication Date
2021/01/01
Online ISSN
1745-1337
DOI
10.1587/transfun.2020CIP0022
Type of Manuscript
Special Section PAPER (Special Section on Cryptography and Information Security)
Authors

Tsunato NAKAI
  Mitsubishi Electric Corporation
Daisuke SUZUKI
  Mitsubishi Electric Corporation
Fumio OMATSU
  Mitsubishi Electric Corporation
Takeshi FUJINO
  Ritsumeikan University
