
Keyword Search Results

[Keyword] approximate nearest neighbor (4 hits)

1-4 of 4 hits
  • Speeding up Extreme Multi-Label Classifier by Approximate Nearest Neighbor Search

    Yukihiro TAGAMI

    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2018/08/06
    Vol: E101-D No:11
    Page(s): 2784-2794

    Extreme multi-label classification methods have been widely used in Web-scale classification tasks such as Web page tagging and product recommendation. In this paper, we present a novel graph embedding method called “AnnexML”. At the training step, AnnexML constructs a k-nearest neighbor graph of label vectors and attempts to reproduce the graph structure in the embedding space. Prediction is performed efficiently by an approximate nearest neighbor search method that explores the learned k-nearest neighbor graph in the embedding space. We conducted evaluations on several large-scale real-world data sets and compared our method with recent state-of-the-art methods. Experimental results show that AnnexML significantly improves prediction accuracy, especially on data sets with a larger label space. In addition, AnnexML improves the trade-off between prediction time and accuracy: at the same level of accuracy, its prediction time was up to 58 times faster than that of SLEEC, a state-of-the-art embedding-based method.
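
    The two ingredients the abstract names, a k-nearest neighbor graph over vectors and a graph traversal at query time, can be illustrated in a few lines. The sketch below is ours, not the AnnexML implementation: it omits the learned embedding entirely, uses random vectors as a stand-in, and performs a simple greedy best-first walk on the graph.

    ```python
    import numpy as np

    def knn_graph(X, k):
        """Brute-force k-NN graph: neighbors[i] holds the k nearest points to X[i]."""
        sq = (X ** 2).sum(1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)  # pairwise squared distances
        np.fill_diagonal(d2, np.inf)                      # exclude self-edges
        return np.argsort(d2, axis=1)[:, :k]              # (n, k) neighbor indices

    def greedy_ann(X, neighbors, q, start=0):
        """Greedy best-first walk on the k-NN graph toward query q."""
        cur, cur_d = start, ((X[start] - q) ** 2).sum()
        while True:
            cand = neighbors[cur]
            d = ((X[cand] - q) ** 2).sum(axis=1)
            if d.min() >= cur_d:                          # no neighbor is closer: stop
                return cur
            cur, cur_d = cand[d.argmin()], d.min()

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 16))       # stand-in for learned label embeddings
    nbrs = knn_graph(X, k=8)
    q = rng.normal(size=16)
    print("approx NN:", greedy_ann(X, nbrs, q),
          "exact NN:", int(((X - q) ** 2).sum(1).argmin()))
    ```

    The greedy walk can stop at a local optimum; production graph-search methods mitigate this with multiple entry points or priority queues.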

  • Multiple Binary Codes for Fast Approximate Similarity Search

    Shinichi SHIRAKAWA

    PAPER-Pattern Recognition

    Publicized: 2014/12/11
    Vol: E98-D No:3
    Page(s): 671-680

    Binary hashing, which transforms a real-valued vector into a binary code, is a fast approximate similarity search technique: the similarity between two vectors is measured by the Hamming distance between their codes. A hash table is often used to perform a constant-time similarity search, but the number of hash-table accesses grows as the code length increases. In this paper, we consider a method that uses multiple binary codes to avoid probing buckets at large Hamming radii. Further, we integrate the proposed approach with the existing multi-index hashing (MIH) method to accelerate similarity search in the Hamming space, and we propose a method for learning the binary hash functions for multiple binary codes. Experiments on similarity search over a dataset of up to 50 million items show that the proposed method achieves a faster similarity search than either a conventional linear scan or a hash-table search.
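
    As a rough illustration of hash-table search over binary codes, the sketch below combines random-projection hashing with a minimal multi-index hashing scheme: the b-bit code is split into m disjoint substrings, each indexed in its own table, and by the pigeonhole principle any code within Hamming radius r < m of the query matches at least one substring exactly. This is our simplification, not the paper's learned multiple-code method; all names are ours.

    ```python
    import numpy as np
    from collections import defaultdict

    def random_projection_hash(X, W):
        """Signs of random projections -> binary codes (one row of bits per item)."""
        return (X @ W > 0).astype(np.uint8)

    def build_mih(codes, m):
        """Multi-index hashing: one hash table per disjoint substring of the code."""
        n, b = codes.shape
        tables = []
        for idx in np.array_split(np.arange(b), m):
            t = defaultdict(list)
            for i, row in enumerate(codes[:, idx]):
                t[row.tobytes()].append(i)
            tables.append((idx, t))
        return tables

    def mih_query(tables, codes, qcode, r):
        """Collect candidates that match the query exactly in some substring,
        then verify each candidate's full Hamming distance against radius r."""
        cand = set()
        for idx, t in tables:
            cand.update(t.get(qcode[idx].tobytes(), []))
        return [i for i in cand if int((codes[i] != qcode).sum()) <= r]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 32))
    W = rng.normal(size=(32, 64))                 # 64-bit codes
    codes = random_projection_hash(X, W)
    tables = build_mih(codes, m=4)                # r < m guarantees no misses
    q = X[123] + 0.05 * rng.normal(size=32)       # near-duplicate query
    qcode = random_projection_hash(q[None], W)[0]
    print(mih_query(tables, codes, qcode, r=3))
    ```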

  • Approximate Nearest Neighbor Based Feature Quantization Algorithm for Robust Hashing

    Yuenan LI  Hao LUO

    LETTER-Image Processing and Video Processing

    Vol: E95-D No:12
    Page(s): 3109-3112

    In this letter, the problem of feature quantization in robust hashing is studied from the perspective of approximate nearest neighbor (ANN) search. We model the features of perceptually identical media as approximate nearest neighbors in the feature set and show that ANN indexing can meet the robustness and discrimination requirements of feature quantization. A feature quantization algorithm is then developed by exploiting random-projection based ANN indexing. For the performance study, the distortion tolerance and randomness of the quantizer are derived analytically. Experimental results demonstrate that the proposed quantizer outperforms state-of-the-art quantizers, and its random nature provides robust hashing with security against hash forgery.
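
    A minimal sketch of the general random-projection idea follows, assuming Gaussian projections and a seeded generator standing in for a secret key; it illustrates why nearby features tend to quantize to the same bits, not the specific quantizer derived in the letter. All names are ours.

    ```python
    import numpy as np

    class RPQuantizer:
        """Random-projection quantizer: a feature vector maps to the sign pattern
        of k random projections. Nearby features are likely to share the pattern;
        without the secret projections, the mapping is hard to forge."""
        def __init__(self, dim, bits, seed):
            rng = np.random.default_rng(seed)     # seed plays the role of a key
            self.W = rng.normal(size=(dim, bits))
        def quantize(self, x):
            return (x @ self.W > 0).astype(np.uint8)

    q = RPQuantizer(dim=64, bits=16, seed=42)
    rng = np.random.default_rng(1)
    x = rng.normal(size=64)
    x_noisy = x + 0.05 * rng.normal(size=64)      # "perceptually identical" copy
    print("bit agreement:", (q.quantize(x) == q.quantize(x_noisy)).mean())
    ```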

  • Quantization-Based Approximate Nearest Neighbor Search with Optimized Multiple Residual Codebooks

    Yusuke UCHIDA  Koichi TAKAGI  Ryoichi KAWADA

    LETTER-Image Processing and Video Processing

    Vol: E94-D No:7
    Page(s): 1510-1514

    Nearest neighbor search (NNS) among large-scale, high-dimensional vectors plays an important role in recent large-scale multimedia search applications. This paper proposes an optimized multiple-codebook construction method for an approximate NNS scheme based on product quantization, in which sets of residual sub-vectors are clustered according to their distribution and the codebooks for product quantization are constructed from these clusters. Our approach makes it possible to adaptively select the number of codebooks by trading off search accuracy against the amount of memory available.
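
    For context, the sketch below shows plain product quantization with asymmetric distance computation (per-block lookup tables built from the raw query). The paper's actual contribution, clustering residual sub-vectors and training multiple optimized codebooks, is not reproduced here; the code is our illustration of the underlying PQ scheme, with all names ours.

    ```python
    import numpy as np

    def kmeans(X, k, iters=20, seed=0):
        """Tiny k-means used here only for codebook training."""
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            assign = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if (assign == j).any():
                    C[j] = X[assign == j].mean(0)
        return C

    def train_pq(X, m, k):
        """Product quantization: one k-word codebook per sub-vector block."""
        return [kmeans(B, k) for B in np.split(X, m, axis=1)]

    def encode(codebooks, X):
        """Replace each sub-vector by the index of its nearest codeword."""
        blocks = np.split(X, len(codebooks), axis=1)
        return np.stack([((B[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
                         for B, C in zip(blocks, codebooks)], axis=1)

    def adc_search(codebooks, codes, q):
        """Asymmetric distance: sum per-block query-to-codeword distance tables."""
        qblocks = np.split(q, len(codebooks))
        tables = [((C - qb) ** 2).sum(1) for qb, C in zip(qblocks, codebooks)]
        d = sum(t[codes[:, j]] for j, t in enumerate(tables))
        return int(d.argmin())

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 32))
    cbs = train_pq(X, m=4, k=16)          # 4 blocks, 16 centroids each
    codes = encode(cbs, X)                # 2000 items, 4 bytes of code each
    q = X[7] + 0.1 * rng.normal(size=32)
    print("ADC nearest:", adc_search(cbs, codes, q))
    ```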