This paper investigates the effects of backdoor attacks on graph neural networks (GNNs) trained through simple data augmentation by modifying the edges of the graph in graph classification. The numerical results show that GNNs trained with data augmentation remain vulnerable to backdoor attacks and may even be more vulnerable to such attacks than GNNs without data augmentation.
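The abstract mentions data augmentation performed by modifying the edges of training graphs. The paper's exact augmentation scheme is not described on this page, but a minimal sketch of one common edge-modification augmentation (randomly dropping existing edges and adding absent ones, assuming an undirected simple graph given as an edge list) might look like:

```python
import random

def augment_edges(num_nodes, edges, p_drop=0.1, p_add=0.1, seed=0):
    """Illustrative edge-modification augmentation (hypothetical sketch;
    not necessarily the scheme used in the paper): randomly drop a
    fraction of existing edges and add a fraction of absent ones."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    # Keep each existing edge with probability 1 - p_drop.
    kept = {e for e in edge_set if rng.random() >= p_drop}
    # Add each currently absent node pair with probability p_add.
    for u in range(num_nodes):
        for v in range(u + 1, num_nodes):
            if (u, v) not in edge_set and rng.random() < p_add:
                kept.add((u, v))
    return sorted(kept)
```

An augmented copy of each training graph would then be produced per epoch (or once before training) and fed to the GNN alongside or instead of the original; the paper's finding is that this kind of perturbation does not reliably remove a subgraph-trigger backdoor and can even strengthen it.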
Shingo YASHIKI
Toyohashi University of Technology
Chako TAKAHASHI
Yamagata University
Koutarou SUZUKI
Toyohashi University of Technology
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Shingo YASHIKI, Chako TAKAHASHI, Koutarou SUZUKI, "Backdoor Attacks on Graph Neural Networks Trained with Data Augmentation" in IEICE TRANSACTIONS on Fundamentals,
vol. E107-A, no. 3, pp. 355-358, March 2024, doi: 10.1587/transfun.2023CIL0007.
Abstract: This paper investigates the effects of backdoor attacks on graph neural networks (GNNs) trained through simple data augmentation by modifying the edges of the graph in graph classification. The numerical results show that GNNs trained with data augmentation remain vulnerable to backdoor attacks and may even be more vulnerable to such attacks than GNNs without data augmentation.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2023CIL0007/_p
@ARTICLE{e107-a_3_355,
author={Shingo YASHIKI and Chako TAKAHASHI and Koutarou SUZUKI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Backdoor Attacks on Graph Neural Networks Trained with Data Augmentation},
year={2024},
volume={E107-A},
number={3},
pages={355-358},
abstract={This paper investigates the effects of backdoor attacks on graph neural networks (GNNs) trained through simple data augmentation by modifying the edges of the graph in graph classification. The numerical results show that GNNs trained with data augmentation remain vulnerable to backdoor attacks and may even be more vulnerable to such attacks than GNNs without data augmentation.},
keywords={},
doi={10.1587/transfun.2023CIL0007},
ISSN={1745-1337},
month={March},}
TY - JOUR
TI - Backdoor Attacks on Graph Neural Networks Trained with Data Augmentation
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 355
EP - 358
AU - Shingo YASHIKI
AU - Chako TAKAHASHI
AU - Koutarou SUZUKI
PY - 2024
DO - 10.1587/transfun.2023CIL0007
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E107-A
IS - 3
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - March 2024
AB - This paper investigates the effects of backdoor attacks on graph neural networks (GNNs) trained through simple data augmentation by modifying the edges of the graph in graph classification. The numerical results show that GNNs trained with data augmentation remain vulnerable to backdoor attacks and may even be more vulnerable to such attacks than GNNs without data augmentation.
ER -