In this paper, we propose an improved Generative Adversarial Network with an attention module in the Generator, which enhances the effectiveness of the Generator. Furthermore, recent work has shown that Generator conditioning affects GAN performance. Leveraging this insight, we explore the effect of different normalization methods (spectral normalization, instance normalization) on the Generator and Discriminator. Moreover, an enhanced loss function based on the Wasserstein divergence alleviates the difficulty of training the model in practice.
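The spectral normalization explored in the abstract constrains a layer's weight matrix so that its largest singular value is approximately 1, which is commonly estimated by power iteration. The following is a minimal pure-Python sketch of that idea, not the paper's actual implementation; function names (`spectral_norm`, `normalize_weight`) are illustrative, and real GAN code would apply this per layer inside a deep-learning framework.

```python
def matvec(W, v):
    """Multiply matrix W (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def transpose(W):
    return [list(col) for col in zip(*W)]

def _unit(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def spectral_norm(W, iters=50):
    """Estimate the largest singular value of W by power iteration."""
    v = [1.0] * len(W[0])
    for _ in range(iters):
        u = _unit(matvec(W, v))            # left singular vector estimate
        v = _unit(matvec(transpose(W), u)) # right singular vector estimate
    # sigma ~= u^T W v once u, v have converged
    Wv = matvec(W, v)
    return sum(ui * wi for ui, wi in zip(u, Wv))

def normalize_weight(W, iters=50):
    """Divide W by its spectral norm so its largest singular value is ~1."""
    sigma = spectral_norm(W, iters)
    return [[w / sigma for w in row] for row in W]
```

For example, the diagonal matrix `[[3, 0], [0, 1]]` has spectral norm 3, and after normalization its spectral norm is 1, which is the Lipschitz-style constraint that is believed to improve Discriminator (and Generator) conditioning.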
KaiXu CHEN
Kanazawa University
Satoshi YAMANE
Kanazawa University
KaiXu CHEN, Satoshi YAMANE, "Enhanced Full Attention Generative Adversarial Networks" in IEICE TRANSACTIONS on Information and Systems,
vol. E106-D, no. 5, pp. 813-817, May 2023, doi: 10.1587/transinf.2022DLL0007.
Abstract: In this paper, we propose an improved Generative Adversarial Network with an attention module in the Generator, which enhances the effectiveness of the Generator. Furthermore, recent work has shown that Generator conditioning affects GAN performance. Leveraging this insight, we explore the effect of different normalization methods (spectral normalization, instance normalization) on the Generator and Discriminator. Moreover, an enhanced loss function based on the Wasserstein divergence alleviates the difficulty of training the model in practice.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022DLL0007/_p
@ARTICLE{e106-d_5_813,
author={KaiXu CHEN and Satoshi YAMANE},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Enhanced Full Attention Generative Adversarial Networks},
year={2023},
volume={E106-D},
number={5},
pages={813-817},
abstract={In this paper, we propose an improved Generative Adversarial Network with an attention module in the Generator, which enhances the effectiveness of the Generator. Furthermore, recent work has shown that Generator conditioning affects GAN performance. Leveraging this insight, we explore the effect of different normalization methods (spectral normalization, instance normalization) on the Generator and Discriminator. Moreover, an enhanced loss function based on the Wasserstein divergence alleviates the difficulty of training the model in practice.},
keywords={},
doi={10.1587/transinf.2022DLL0007},
ISSN={1745-1361},
month={May},
}
TY - JOUR
TI - Enhanced Full Attention Generative Adversarial Networks
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 813
EP - 817
AU - KaiXu CHEN
AU - Satoshi YAMANE
PY - 2023
DO - 10.1587/transinf.2022DLL0007
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E106-D
IS - 5
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - May 2023
AB - In this paper, we propose an improved Generative Adversarial Network with an attention module in the Generator, which enhances the effectiveness of the Generator. Furthermore, recent work has shown that Generator conditioning affects GAN performance. Leveraging this insight, we explore the effect of different normalization methods (spectral normalization, instance normalization) on the Generator and Discriminator. Moreover, an enhanced loss function based on the Wasserstein divergence alleviates the difficulty of training the model in practice.
ER -