
Keyword Search Result

[Keyword] Wasserstein divergence (2 hits)

  • Multi-Dimensional Fused Gromov Wasserstein Discrepancy for Edge-Attributed Graphs Open Access

    Keisuke KAWANO  Satoshi KOIDE  Hiroaki SHIOKAWA  Toshiyuki AMAGASA  

     
    PAPER

    Publicized: 2024/01/12
    Vol: E107-D No:5
    Page(s): 683-693

    Graph dissimilarities provide a powerful and ubiquitous approach for applying machine learning algorithms to edge-attributed graphs. However, conventional optimal transport-based dissimilarities cannot handle edge attributes. In this paper, we propose an optimal transport-based dissimilarity between graphs with edge attributes. The proposed method, multi-dimensional fused Gromov-Wasserstein discrepancy (MFGW), naturally incorporates the mismatch of edge attributes into optimal transport theory. Unlike conventional optimal transport-based dissimilarities, MFGW can directly handle edge attributes in addition to the structural information of graphs. Furthermore, we propose an iterative algorithm, which can be computed on GPUs, to solve the non-convex quadratic programming problems involved in MFGW. Experimentally, we demonstrate that MFGW outperforms conventional optimal transport-based dissimilarities in several machine learning applications, including supervised classification, subgraph matching, and graph barycenter calculation.
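
    As a rough, non-authoritative illustration of the idea (not the authors' MFGW algorithm; their GPU-oriented solver is not reproduced here), the sketch below evaluates a fused Gromov-Wasserstein objective in which a stack of per-dimension edge-attribute cost matrices enters the quadratic term, solved with a generic Frank-Wolfe iteration via the POT library. The function name and the square-loss choice are assumptions for illustration.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def fused_gw_multi(M, C1, C2, p, q, alpha=0.5, n_iter=100):
    """Illustrative sketch (not the paper's MFGW solver): Frank-Wolfe for a
    fused Gromov-Wasserstein objective with stacked edge-attribute costs.

    M  : (n1, n2) node-feature cost matrix between the two graphs.
    C1 : (d, n1, n1) per-dimension edge-attribute matrices of graph 1.
    C2 : (d, n2, n2) per-dimension edge-attribute matrices of graph 2.
    p, q : node weight histograms of the two graphs (each sums to 1).
    """
    n1, n2 = M.shape
    # Constant part of the quadratic (square-loss) GW term, summed over
    # attribute dimensions (decomposition of Peyre et al., 2016).
    constC = sum(
        np.outer((C1k ** 2) @ p, np.ones(n2)) +
        np.outer(np.ones(n1), (C2k ** 2) @ q)
        for C1k, C2k in zip(C1, C2)
    )
    T = np.outer(p, q)  # initial coupling: independent product
    for it in range(n_iter):
        cross = sum(C1k @ T @ C2k.T for C1k, C2k in zip(C1, C2))
        # Gradient of (1-alpha)*<M,T> + alpha*(quadratic GW term) w.r.t. T.
        grad = (1 - alpha) * M + 2 * alpha * (constC - 2 * cross)
        T_lp = ot.emd(p, q, grad)              # linear minimization oracle
        T = T + 2.0 / (it + 2.0) * (T_lp - T)  # classic FW step size
    cross = sum(C1k @ T @ C2k.T for C1k, C2k in zip(C1, C2))
    value = (1 - alpha) * np.sum(M * T) + alpha * np.sum((constC - 2 * cross) * T)
    return value, T
```

    With a single attribute dimension (d = 1) this reduces to the standard fused Gromov-Wasserstein discrepancy; the multi-dimensional stacking is what lets edge-attribute mismatch enter the quadratic cost alongside structure.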

  • Enhanced Full Attention Generative Adversarial Networks

    KaiXu CHEN  Satoshi YAMANE  

     
    LETTER-Core Methods

    Publicized: 2023/01/12
    Vol: E106-D No:5
    Page(s): 813-817

    In this paper, we propose an improved Generative Adversarial Network with an attention module in the Generator, which enhances the Generator's effectiveness. Furthermore, recent work has shown that Generator conditioning affects GAN performance. Leveraging this insight, we explore the effect of different normalizations (spectral normalization, instance normalization) on the Generator and Discriminator. Moreover, an enhanced loss function, the Wasserstein divergence distance, alleviates the difficulty of training the model in practice.
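
    For readers unfamiliar with the Wasserstein divergence loss, the following minimal PyTorch sketch implements the critic objective of WGAN-div (Wu et al., 2018), which this letter's loss builds on. The function name, the image-shaped (NCHW) tensors, and the defaults k=2, p=6 are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def wdiv_critic_loss(critic, real, fake, k=2.0, p=6.0):
    """Critic loss with the Wasserstein divergence penalty (WGAN-div);
    k and p follow the defaults of Wu et al. (2018)."""
    # Random interpolates between real and generated samples
    # (assumes NCHW image batches).
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).detach().requires_grad_(True)
    # Gradient of the critic at the interpolates, kept in the graph so the
    # penalty itself is differentiable.
    grad = torch.autograd.grad(
        outputs=critic(x_hat).sum(), inputs=x_hat, create_graph=True
    )[0]
    grad_norm = grad.flatten(start_dim=1).norm(2, dim=1)
    # E[D(fake)] - E[D(real)] + k * E[||grad D(x_hat)||^p]
    return critic(fake).mean() - critic(real).mean() + k * grad_norm.pow(p).mean()
```

    When updating the critic, `fake` should be detached from the generator's graph; spectral normalization, one of the normalizations compared in the letter, can be applied to critic layers via `torch.nn.utils.spectral_norm`.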