Virtual Adversarial Training (VAT) has shown impressive results among the recently developed family of regularization methods known as consistency regularization. VAT trains a classifier on adversarial samples, generated by injecting perturbations into the input space, and thereby enhances its generalization ability. However, such adversarial samples can be generated only within a very small area around the input data point, which limits their adversarial effectiveness. To address this problem, we propose LVAT (Latent space VAT), which injects perturbations into the latent space instead of the input space. LVAT can generate adversarial samples flexibly, resulting in a greater adversarial effect and thus more effective regularization. The latent space is built by a generative model, and in this paper we examine two different types of models: a variational auto-encoder and a normalizing flow, specifically Glow. We evaluated our method in both supervised and semi-supervised learning scenarios for an image classification task on the SVHN and CIFAR-10 datasets, and found that it outperforms VAT and other state-of-the-art methods.
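The abstract outlines the mechanism: perturb a latent code rather than the raw pixels, decode it, and penalize any change in the classifier's prediction. Below is a minimal PyTorch-style sketch of such a latent-space consistency loss, modeled on VAT's power-iteration scheme; the encoder/decoder interfaces, hyperparameter names (xi, eps, n_power), and loss details are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch of a latent-space VAT loss (not the paper's code).
    # Assumes a pretrained generative model exposing encoder/decoder functions
    # and a classifier that returns logits of shape (batch, num_classes).
    import torch
    import torch.nn.functional as F

    def kl_div(p_logits, q_logits):
        # KL(p || q) between the softmax distributions of two logit tensors.
        p = F.softmax(p_logits, dim=1)
        return (p * (F.log_softmax(p_logits, dim=1)
                     - F.log_softmax(q_logits, dim=1))).sum(dim=1).mean()

    def lvat_loss(classifier, encoder, decoder, x, xi=0.1, eps=1.0, n_power=1):
        with torch.no_grad():
            p_logits = classifier(x)   # predictions on the clean input
            z = encoder(x)             # map the input into the latent space

        # Power iteration: estimate the latent direction that most changes
        # the classifier's output distribution, as in VAT but on z, not x.
        d = torch.randn_like(z)
        for _ in range(n_power):
            d = (xi * F.normalize(d.flatten(1), dim=1).view_as(z)).requires_grad_(True)
            adv_dist = kl_div(p_logits, classifier(decoder(z + d)))
            d = torch.autograd.grad(adv_dist, d)[0].detach()

        # Decode the adversarially perturbed latent code and penalize the
        # divergence between predictions on the clean and perturbed samples.
        r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(z)
        x_adv = decoder(z + r_adv).detach()
        return kl_div(p_logits, classifier(x_adv))

In training, this consistency term would be added to the usual supervised loss; because it needs no labels, it can also be computed on unlabeled data in the semi-supervised setting, as in VAT.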
Genki OSADA
Philips Co-Creation Center, University of Tsukuba, I Dragon Corporation
Budrul AHSAN
Philips Co-Creation Center, The Tokyo Foundation for Policy Research
Revoti PRASAD BORA
Lowe's Services India Pvt. Ltd.
Takashi NISHIDE
University of Tsukuba
Genki OSADA, Budrul AHSAN, Revoti PRASAD BORA, Takashi NISHIDE, "Latent Space Virtual Adversarial Training for Supervised and Semi-Supervised Learning" in IEICE Transactions on Information and Systems,
vol. E105-D, no. 3, pp. 667-678, March 2022, doi: 10.1587/transinf.2021EDP7161.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7161/_p
@ARTICLE{e105-d_3_667,
author={Genki OSADA and Budrul AHSAN and Revoti PRASAD BORA and Takashi NISHIDE},
journal={IEICE Transactions on Information and Systems},
title={Latent Space Virtual Adversarial Training for Supervised and Semi-Supervised Learning},
year={2022},
volume={E105-D},
number={3},
pages={667-678},
doi={10.1587/transinf.2021EDP7161},
ISSN={1745-1361},
month={March},
}
TY - JOUR
TI - Latent Space Virtual Adversarial Training for Supervised and Semi-Supervised Learning
T2 - IEICE Transactions on Information and Systems
SP - 667
EP - 678
AU - Genki OSADA
AU - Budrul AHSAN
AU - Revoti PRASAD BORA
AU - Takashi NISHIDE
PY - 2022
DO - 10.1587/transinf.2021EDP7161
JO - IEICE Transactions on Information and Systems
SN - 1745-1361
VL - E105-D
IS - 3
JA - IEICE Transactions on Information and Systems
Y1 - March 2022
ER -