
Author Search Result

[Author] Yongheng MA (1 hit)

1-1 hit
  • Modality-Fused Graph Network for Cross-Modal Retrieval

Fei WU, Shuaishuai LI, Guangchuan PENG, Yongheng MA, Xiao-Yuan JING

     
    LETTER-Pattern Recognition

    Publicized: 2023/02/09
    Vol: E106-D No:5
    Page(s): 1094-1097

    Cross-modal hashing technology has attracted much attention for its favorable retrieval performance and low storage cost. However, for existing cross-modal hashing methods, the heterogeneity of data across modalities remains a challenge, and how to fully explore and utilize intra-modality features has not been well studied. In this paper, we propose a novel cross-modal hashing approach called Modality-Fused Graph Network (MFGN). The network architecture consists of a text channel and an image channel that learn modality-specific features, and a modality fusion channel that uses a graph network to learn modality-shared representations and thereby reduce the heterogeneity across modalities. In addition, an integration module is introduced into the image and text channels to fully explore intra-modality features. Experiments on two widely used datasets show that our approach outperforms state-of-the-art cross-modal hashing methods.
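
    To make the described two-channel-plus-fusion architecture concrete, here is a minimal PyTorch-style sketch of a modality-fused hashing network in the spirit of MFGN. The feature dimensions, hash-code length, the graph construction (a batch-level similarity graph), and the way modality-specific and shared features are combined before hashing are hypothetical illustration choices, not the authors' exact design.

        # Hypothetical sketch of a modality-fused hashing network (not the authors' code).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class GraphFusion(nn.Module):
            """Simple graph convolution over a batch-level affinity graph (assumed design)."""
            def __init__(self, dim):
                super().__init__()
                self.proj = nn.Linear(dim, dim)

            def forward(self, x):
                # Build a row-normalized similarity graph over the fused features.
                x_n = F.normalize(x, dim=1)
                adj = F.softmax(x_n @ x_n.t(), dim=1)
                return F.relu(self.proj(adj @ x))

        class MFGNSketch(nn.Module):
            def __init__(self, img_dim=4096, txt_dim=1386, hidden=512, code_len=32):
                super().__init__()
                # Modality-specific channels learn per-modality features.
                self.img_channel = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
                self.txt_channel = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
                # Fusion channel: a graph network over concatenated modality features
                # learns modality-shared representations.
                self.fusion = GraphFusion(2 * hidden)
                # Hash layers map integrated features to binary-like codes via tanh.
                self.img_hash = nn.Linear(hidden + 2 * hidden, code_len)
                self.txt_hash = nn.Linear(hidden + 2 * hidden, code_len)

            def forward(self, img_feat, txt_feat):
                f_i = self.img_channel(img_feat)
                f_t = self.txt_channel(txt_feat)
                shared = self.fusion(torch.cat([f_i, f_t], dim=1))
                # Integrate modality-specific and shared features before hashing
                # (one plausible reading of the integration module).
                b_i = torch.tanh(self.img_hash(torch.cat([f_i, shared], dim=1)))
                b_t = torch.tanh(self.txt_hash(torch.cat([f_t, shared], dim=1)))
                return b_i, b_t

        if __name__ == "__main__":
            net = MFGNSketch()
            codes_i, codes_t = net(torch.randn(8, 4096), torch.randn(8, 1386))
            print(codes_i.shape, codes_t.shape)  # torch.Size([8, 32]) for each modality

    At retrieval time, the tanh outputs would typically be binarized with sign() and compared by Hamming distance across modalities; the dimensions above (4096-d image features, 1386-d text features, 32-bit codes) are placeholders, not values reported in the letter.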