IEICE TRANSACTIONS on Fundamentals

Deep Attention Residual Hashing

Yang LI, Zhuang MIAO, Ming HE, Yafei ZHANG, Hang LI

Summary:

How to represent images as highly compact binary codes is a critical issue in many computer vision tasks. Existing deep hashing methods typically focus on designing loss functions that use pairwise or triplet labels. However, these methods ignore the attention mechanism of the human visual system. In this letter, we propose a novel Deep Attention Residual Hashing (DARH) method that learns hash codes directly from a simple pointwise classification loss. Unlike previous methods, our method does not need to generate all possible pairwise or triplet labels from the training dataset. Specifically, we develop a new type of attention layer that can learn human eye fixation and significantly improves the representational ability of the hash codes. In addition, we embed the attention layer into a residual network to learn discriminative image features and hash codes simultaneously in an end-to-end manner. Extensive experiments on standard benchmarks demonstrate that our method preserves instance-level similarity and outperforms state-of-the-art deep hashing methods in image retrieval.
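To make the idea concrete, below is a minimal PyTorch-style sketch of the three ingredients the summary names: an attention layer inside a residual block, a hash layer producing relaxed binary codes, and training driven by a pointwise classification loss rather than pairwise or triplet sampling. All module names, layer sizes, and the particular attention design are illustrative assumptions for this sketch, not the architecture reported in the letter.

# Sketch only: attention-augmented residual block + hash layer trained with a
# pointwise classification loss. Sizes and attention design are assumptions.
import torch
import torch.nn as nn

class AttentionResidualBlock(nn.Module):
    """Residual block whose residual branch is re-weighted by a learned attention map."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # 1x1 conv + sigmoid produces a spatial attention mask in [0, 1].
        self.attention = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        residual = self.body(x)
        mask = self.attention(residual)          # (N, 1, H, W) attention map
        return torch.relu(x + residual * mask)   # attended residual connection

class HashingHead(nn.Module):
    """Maps pooled features to K hash bits plus class logits for the pointwise loss."""
    def __init__(self, in_features, num_bits, num_classes):
        super().__init__()
        self.hash_layer = nn.Linear(in_features, num_bits)
        self.classifier = nn.Linear(num_bits, num_classes)

    def forward(self, features):
        codes = torch.tanh(self.hash_layer(features))  # relaxed binary codes in (-1, 1)
        logits = self.classifier(codes)
        return codes, logits

# Training uses only per-image class labels (pointwise), so no pairwise or
# triplet enumeration is needed:
#   loss = nn.CrossEntropyLoss()(logits, labels)
# At retrieval time the binary code is obtained as torch.sign(codes).

In this sketch the classification loss supervises the hash bits directly through the classifier, which is what removes the need to enumerate image pairs or triplets during training.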

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E101-A No.3 pp.654-657
Publication Date
2018/03/01
Online ISSN
1745-1337
DOI
10.1587/transfun.E101.A.654
Type of Manuscript
LETTER
Category
Image

Authors

Yang LI
  PLA University of Science and Technology (PLAUST)
Zhuang MIAO
  PLA University of Science and Technology (PLAUST)
Ming HE
  PLA University of Science and Technology (PLAUST)
Yafei ZHANG
  PLA University of Science and Technology (PLAUST)
Hang LI
  PLA University of Science and Technology (PLAUST)
