Soh YOSHIDA Mitsuji MUNEYASU Takahiro OGAWA Miki HASEYAMA
In this paper, we address the problem of analyzing the topics included in a social video group to improve the retrieval performance of videos. Unlike previous methods that focused on an individual visual aspect of videos, the proposed method aims to leverage the “mutual reinforcement” of heterogeneous modalities, such as tags and users, associated with videos on the Internet. To represent multiple types of relationships between the heterogeneous modalities, the proposed method constructs three subgraphs: user-tag, video-video, and video-tag graphs. We combine the three types of graphs to obtain a heterogeneous graph. The extraction of latent features, i.e., topics, then becomes feasible by applying graph-based soft clustering to the heterogeneous graph. By estimating the membership of each video in each grouped cluster, the proposed method defines a new video similarity measure. Since the understanding of video content is enhanced by exploiting latent features obtained from different types of data that complement each other, the proposed method improves the performance of visual reranking. Results of experiments on a video dataset consisting of YouTube-8M videos show the effectiveness of the proposed method, which achieves a 24.3% improvement in mean normalized discounted cumulative gain in a search ranking task compared with the baseline method.
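As a rough illustration of this pipeline, the following sketch combines subgraph adjacency matrices into one heterogeneous graph, applies a simple graph-based soft clustering (symmetric NMF here, standing in for the paper's clustering step), and scores video pairs by the cosine similarity of their topic memberships; all sizes, the clustering choice, and the data are hypothetical.

```python
import numpy as np

def soft_cluster_membership(A, n_topics=3, n_iter=200, seed=0):
    """Toy graph-based soft clustering via symmetric NMF: A ~ H H^T.
    A is a symmetric nonnegative adjacency matrix of the combined
    heterogeneous graph (user, tag, and video nodes stacked block-wise)."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], n_topics))
    for _ in range(n_iter):
        # multiplicative update for symmetric NMF
        H *= (A @ H) / np.maximum(H @ (H.T @ H), 1e-12)
    # normalized rows of H act as soft topic memberships
    return H / np.maximum(H.sum(axis=1, keepdims=True), 1e-12)

# hypothetical combined graph over 2 users + 3 tags + 4 videos (9 nodes)
rng = np.random.default_rng(1)
A = rng.random((9, 9)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)

M = soft_cluster_membership(A)
v = M[5:9]                                  # the video block of the graph
# topic-based video similarity: cosine between membership vectors
sim = v @ v.T / (np.linalg.norm(v, axis=1, keepdims=True)
                 * np.linalg.norm(v, axis=1) + 1e-12)
print(np.round(sim, 2))
```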
Takahiro OGAWA Keisuke MAEDA Miki HASEYAMA
An inpainting method via sparse representation based on a new phaseless quality metric is presented in this paper. Since power spectra, i.e., phaseless features, of local regions within images represent their texture characteristics more successfully than pixel values do, a new quality metric based on these phaseless features is derived for image representation. Specifically, the proposed method enables sparse representation of target signals, i.e., target patches including missing intensities, by monitoring the errors to which phase retrieval converges as the novel phaseless quality metric. This is the main contribution of our study. In this approach, the phase retrieval algorithm used in our method has the following two important roles: (1) derivation of the new quality metric, which can be computed even for images including missing intensities, and (2) conversion of phaseless features, i.e., power spectra, into pixel values, i.e., intensities. This novel approach therefore solves the existing problem that better features and better quality metrics could not be used for inpainting. Results of experiments showed that the proposed method using sparse representation based on the new phaseless quality metric outperforms previously reported methods that directly use pixel values for inpainting.
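The central idea, using the converged error of phase retrieval as a phaseless quality metric, can be sketched as follows; this is a minimal Gerchberg-Saxton-style illustration with toy patch sizes and iteration counts, not the paper's full inpainting algorithm.

```python
import numpy as np

def phase_retrieval_error(target_mag, init_patch, n_iter=100):
    """Alternating-projection (Gerchberg-Saxton style) phase retrieval.
    target_mag: Fourier magnitudes (power-spectrum features) of a patch.
    Returns the recovered patch and the converged magnitude error,
    which serves here as the phaseless quality metric."""
    x = init_patch.astype(float)
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        # project onto the magnitude constraint, keep current phases
        X = target_mag * np.exp(1j * np.angle(X))
        x = np.real(np.fft.ifft2(X))
    err = np.linalg.norm(np.abs(np.fft.fft2(x)) - target_mag)
    return x, err

# toy example: score a candidate patch against target power spectra
rng = np.random.default_rng(0)
target = rng.random((8, 8))
candidate = target + 0.1 * rng.standard_normal((8, 8))
_, err = phase_retrieval_error(np.abs(np.fft.fft2(target)), candidate)
print(f"phaseless quality metric (lower is better): {err:.4f}")
```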
Kaoru KATAYAMA Takashi HIRASHIMA
We present a retrieval method for 3D CAD assemblies consisting of multiple components. The proposed method distinguishes not only the shapes of 3D CAD assemblies but also the layouts of their components. The similarity between two assemblies is computed from feature quantities of the components constituting the assemblies. In order to make the similarity robust to translation and rotation of an assembly in 3D space, we use the 3D Radon transform and the spherical harmonic transform. We show through experimental evaluation that this method achieves better retrieval precision and efficiency than the methods compared.
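A minimal sketch of why spherical harmonics yield rotation-robust descriptors: the per-degree energies of a spherical function's harmonic coefficients are invariant to 3D rotations. The sampled function, grid resolution, and crude quadrature below are illustrative assumptions only.

```python
import numpy as np
from scipy.special import sph_harm

def sh_power_descriptor(f, lmax=4, n_theta=64, n_phi=128):
    """Crude spherical-harmonic analysis of a function sampled on a
    (theta, phi) grid; returns per-degree energies, which are invariant
    to 3D rotations of the underlying spherical feature."""
    theta = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)   # azimuth
    phi = (np.arange(n_theta) + 0.5) * np.pi / n_theta         # polar angle
    T, P = np.meshgrid(theta, phi)
    vals = f(T, P)
    dA = np.sin(P) * (np.pi / n_theta) * (2 * np.pi / n_phi)   # area element
    desc = []
    for l in range(lmax + 1):
        e = 0.0
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, T, P)          # scipy order: (m, n, theta, phi)
            c = np.sum(vals * np.conj(Y) * dA)  # quadrature coefficient
            e += np.abs(c) ** 2
        desc.append(e)
    return np.array(desc)

# hypothetical spherical feature (e.g., a component's 3D Radon feature
# mapped onto the unit sphere)
f = lambda T, P: np.cos(P) ** 2 + 0.3 * np.sin(P) * np.cos(T)
print(np.round(sh_power_descriptor(f), 4))
```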
To enhance cover song identification accuracy on a large-scale music archive, a song-level feature summarization method based on multi-scale representation is proposed. Chroma n-grams are extracted at multiple scales to cope with both global and local tempo changes. We derive an index from the extracted n-grams by clustering to reduce the storage and computation required for DB search. Experiments on widely used music datasets confirmed that the proposed method achieves state-of-the-art accuracy while reducing the cost of cover song search.
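A toy version of the summarization step might look as follows: chroma n-grams are extracted at several temporal scales (approximated here by frame decimation), and k-means centroids serve as the compact song-level index. The scales, n-gram length, and cluster count are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def chroma_ngrams(chroma, n=4, hop=1):
    """Stack n consecutive chroma frames into n-gram vectors."""
    return np.array([chroma[i:i + n].ravel()
                     for i in range(0, len(chroma) - n + 1, hop)])

def multiscale_index(chroma, scales=(1, 2, 4), n=4, n_clusters=8, seed=0):
    """Extract chroma n-grams at several temporal scales (decimation
    approximates tempo changes) and summarize the song by cluster
    centroids, giving a compact index for DB search."""
    grams = np.vstack([chroma_ngrams(chroma[::s], n=n) for s in scales])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(grams)
    return km.cluster_centers_               # song-level summary

# toy 12-dim chroma sequence of 200 frames (hypothetical values)
rng = np.random.default_rng(0)
chroma = rng.random((200, 12))
index = multiscale_index(chroma)
print(index.shape)                           # (8, 48): 8 centroids of 4-frame n-grams
```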
Takamasa FUJII Soh YOSHIDA Mitsuji MUNEYASU
In video search reranking, in addition to the well-known semantic gap, the intent gap, which is the gap between the representation of the users' demand and their real search intention, is becoming a major problem restricting the improvement of reranking performance. To address this problem, we propose video search reranking based on a semantic representation using multiple tags. In the proposed method, we use relevance feedback, through which the user interacts by specifying some example videos from the initial search results. We apply the relevance feedback to reduce the gap between the real intent of the users and the video search results. In addition, we focus on the fact that multiple tags are used to represent video contents. By vectorizing the multiple tags associated with videos on the basis of the Word2Vec algorithm and calculating the centroid of the tag vectors as a collective representation, we can evaluate the semantic similarity between videos by using tag features. We conduct experiments on the YouTube-8M dataset, and the results show that our reranking approach is effective and efficient.
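The tag-based similarity step can be illustrated with a small sketch: average the Word2Vec vectors of each video's tags into a centroid and compare centroids by cosine similarity. The embedding table below is a hypothetical stand-in for trained Word2Vec vectors.

```python
import numpy as np

def tag_centroid(tags, embed):
    """Average the Word2Vec vectors of a video's tags into one
    collective semantic representation; unseen tags are skipped."""
    vecs = [embed[t] for t in tags if t in embed]
    return np.mean(vecs, axis=0) if vecs else None

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# hypothetical 4-dim embeddings standing in for trained Word2Vec vectors
embed = {
    "soccer":   np.array([0.9, 0.1, 0.0, 0.2]),
    "football": np.array([0.8, 0.2, 0.1, 0.1]),
    "goal":     np.array([0.7, 0.0, 0.3, 0.2]),
    "cooking":  np.array([0.0, 0.9, 0.1, 0.4]),
    "recipe":   np.array([0.1, 0.8, 0.2, 0.3]),
}
video_a = tag_centroid(["soccer", "goal"], embed)
video_b = tag_centroid(["football"], embed)
video_c = tag_centroid(["cooking", "recipe"], embed)
print(cosine(video_a, video_b))   # high: semantically close videos
print(cosine(video_a, video_c))   # low: unrelated content
```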
Kazuya OSE Kazunori IWATA Nobuo SUEMATSU
Consider selecting points on a contour in the x-y plane. In shape analysis, this is frequently referred to as contour sampling. It is important to select the points such that they effectively represent the shape of the contour. Generally, the stroke order and number of strokes are informative for that purpose. Several effective methods exist for sampling contours drawn with a certain stroke order and number of strokes, such as letters of the English alphabet or Arabic numerals. However, many contours, such as pictures of symbols, entail an uncertain stroke order and number of strokes, and little research has focused on methods for sampling such contours. This is because selecting the points in this case typically requires a large computational cost to check all the possible choices. In this paper, we present a sampling method that is useful regardless of whether or not the contours are drawn with a certain stroke order and number of strokes. Our sampling method thereby expands the application possibilities of contour processing. We formulate contour sampling as a discrete optimization problem that can be solved using a type of direct search. Based on a geometric graph whose vertices are the points and whose edges form rectangles, we construct an effective objective function for the problem. Using different shape datasets, we demonstrate that our sampling method is effective with respect to shape representation and retrieval.
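To make the discrete-optimization formulation concrete, here is a minimal sketch of contour sampling by a type of direct search: starting from uniformly spaced indices, each selected index is moved by one position whenever that lowers an objective. The polygonal reconstruction error used below is a simple stand-in for the paper's graph-based objective.

```python
import numpy as np

def reconstruction_error(contour, idx):
    """Objective: total distance from contour points to the polygon
    through the selected sample points (a stand-in objective)."""
    idx = sorted(idx)
    err = 0.0
    for a, b in zip(idx, idx[1:] + [idx[0]]):
        seg = (contour[a:b + 1] if a < b
               else np.vstack([contour[a:], contour[:b + 1]]))
        p, q = contour[a], contour[b]
        d = q - p
        t = np.clip(((seg - p) @ d) / (d @ d + 1e-12), 0, 1)
        err += np.sum(np.linalg.norm(seg - (p + t[:, None] * d), axis=1))
    return err

def direct_search_sampling(contour, k=8, n_sweeps=20):
    """Coordinate-wise direct search: move each selected index by +/-1
    whenever that lowers the objective; stop when no move helps."""
    n = len(contour)
    idx = list(np.linspace(0, n, k, endpoint=False).astype(int))
    best = reconstruction_error(contour, idx)
    for _ in range(n_sweeps):
        improved = False
        for i in range(k):
            for step in (-1, 1):
                cand = idx.copy()
                cand[i] = (cand[i] + step) % n
                if len(set(cand)) == k:
                    e = reconstruction_error(contour, cand)
                    if e < best:
                        idx, best, improved = cand, e, True
        if not improved:
            break
    return sorted(idx), best

# toy closed contour: a five-lobed flower shape
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.c_[np.cos(t) * (1 + 0.3 * np.cos(5 * t)),
                np.sin(t) * (1 + 0.3 * np.cos(5 * t))]
idx, err = direct_search_sampling(contour)
print(idx, round(err, 2))
```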
We propose a method for finding the assembly models contained in another assembly model given as a query from a set of 3D CAD assembly models. A 3D CAD assembly model consists of multiple components and is constructed using 3D CAD software. The proposed method distinguishes assembly models that consist of a subset of the components constituting the query model and whose components have the same layout as that subset. We compute the difference between the shapes and the layouts of the components from sinograms constructed by the Radon transform of their projections from various angles. We evaluate the proposed method experimentally using assembly models that we prepared as a benchmark. The proposed method can also be used to find database models that contain a query model.
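The sinogram comparison can be sketched with a toy example: each component projection is mapped to its Radon-transform sinogram, and circular shifts over projection angles give a rotation-tolerant distance. The images, angle grid, and distance are illustrative assumptions.

```python
import numpy as np
from skimage.transform import radon

def sinogram_descriptor(image, angles=None):
    """Radon transform of a component projection; rows are indexed by
    detector position, columns by projection angle."""
    if angles is None:
        angles = np.arange(0.0, 180.0, 2.0)
    return radon(image, theta=angles)

def sinogram_distance(img_a, img_b):
    """Compare two projections through their sinograms. Circularly
    shifting columns accounts for in-plane rotation between layouts."""
    sa, sb = sinogram_descriptor(img_a), sinogram_descriptor(img_b)
    shifts = [np.linalg.norm(sa - np.roll(sb, s, axis=1))
              for s in range(sb.shape[1])]
    return min(shifts)

# toy binary "projections" of two component layouts (hypothetical)
a = np.zeros((64, 64)); a[20:40, 20:30] = 1
b = np.zeros((64, 64)); b[20:40, 34:44] = 1   # same shape, shifted layout
print(round(sinogram_distance(a, a), 3))       # 0.0: identical
print(round(sinogram_distance(a, b), 3))       # > 0: different layout
```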
Jingjing SI Jing XIANG Yinbo CHENG Kai LIU
Generalized approximate message passing (GAMP) can be applied to compressive phase retrieval (CPR) with excellent phase-transition behavior. In this paper, we introduce the cartoon-texture model into denoising-based phase retrieval GAMP (D-prGAMP) and propose a cartoon-texture model based D-prGAMP (C-T D-prGAMP) algorithm. Then, based on experiments and analyses of how the performance of D-prGAMP algorithms varies with iterations, we propose a 2-stage D-prGAMP algorithm, which makes a tradeoff between the C-T D-prGAMP algorithm and general D-prGAMP algorithms. Finally, to address the non-convergence issues of D-prGAMP, we incorporate adaptive damping into 2-stage D-prGAMP and propose the adaptively damped 2-stage D-prGAMP (2-stage ADD-prGAMP) algorithm. Simulation results show that the runtime of 2-stage D-prGAMP is roughly equivalent to that of BM3D-prGAMP, but 2-stage D-prGAMP achieves higher image reconstruction quality than BM3D-prGAMP. 2-stage ADD-prGAMP spends more reconstruction time than 2-stage D-prGAMP and BM3D-prGAMP, but it achieves PSNRs 0.2 to 3 dB higher than those of 2-stage D-prGAMP and 0.3 to 3.1 dB higher than those of BM3D-prGAMP.
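GAMP itself is too long to reproduce here, but the adaptive damping idea can be isolated: damp each fixed-point update and shrink the damping factor whenever a surrogate cost increases, rejecting the failed step. The update rule and cost below are generic placeholders, not the D-prGAMP equations.

```python
import numpy as np

def adaptively_damped_iteration(x0, update, cost, n_iter=100,
                                damp=0.9, shrink=0.5, grow=1.05):
    """Generic adaptive damping wrapper:
    x <- damp * update(x) + (1 - damp) * x,
    with damp reduced when the cost increases (failed step rejected)."""
    x, c = x0, cost(x0)
    for _ in range(n_iter):
        x_new = damp * update(x) + (1 - damp) * x
        c_new = cost(x_new)
        if c_new > c:                          # divergence detected
            damp = max(damp * shrink, 0.01)
        else:
            x, c = x_new, c_new
            damp = min(damp * grow, 1.0)
    return x

# toy fixed-point problem standing in for a GAMP iteration
A = np.array([[0.7, 0.4], [0.2, 0.6]])
b = np.array([1.0, -0.5])
update = lambda x: A @ x + b                    # converges to (I - A)^{-1} b
cost = lambda x: np.linalg.norm(update(x) - x)  # fixed-point residual
print(adaptively_damped_iteration(np.zeros(2), update, cost))
```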
Real-time weather radar imaging technology is required for generating short-term weather forecasts. Moreover, such technology plays an important role in critical-weather warning systems that are based on vast amounts of Doppler weather radar data. In this study, we propose a weather radar imaging method that uses multi-layer contour detection and segmentation based on MAP-MRF estimation. The proposed method consists of three major steps. The first step generates reflectivity and velocity data using the Doppler radar in the form of raw sweep-unit images in the polar coordinate system; contour lines are then detected on multiple layers using an adaptive median filter and a modified Canny detector based on curvature consistency. The second step interpolates the contours in the Cartesian coordinate system using 3D scattered data interpolation and then segments them based on MAP-MRF estimation and the Metropolis algorithm for each layer. The final step integrates the segmented contour layers and generates PPI images in sweep units. Experimental results show that the proposed method produces a visually improved PPI image in 45% of the time required by conventional methods.
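The MAP-MRF segmentation step can be illustrated in miniature: a Potts smoothness prior plus a Gaussian data term, optimized by the Metropolis algorithm with a simple cooling schedule. The toy "reflectivity layer", parameters, and schedule are assumptions for illustration.

```python
import numpy as np

def metropolis_mrf_segmentation(obs, n_labels=2, beta=1.5, sigma=0.15,
                                n_sweeps=50, T0=2.0, seed=0):
    """MAP-MRF labeling of a 2D field with a Potts prior, optimized by
    the Metropolis algorithm under a simple cooling schedule.
    Energy = Gaussian data term around per-label means
           + beta * number of disagreeing 4-neighbors."""
    rng = np.random.default_rng(seed)
    H, W = obs.shape
    means = np.linspace(obs.min(), obs.max(), n_labels)
    labels = rng.integers(0, n_labels, size=(H, W))

    def local_energy(i, j, l):
        e = (obs[i, j] - means[l]) ** 2 / (2 * sigma ** 2)
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != l:
                e += beta
        return e

    for sweep in range(n_sweeps):
        T = T0 / (1 + sweep)                   # cooling schedule
        for i in range(H):
            for j in range(W):
                prop = rng.integers(0, n_labels)
                dE = local_energy(i, j, prop) - local_energy(i, j, labels[i, j])
                # Metropolis acceptance rule
                if dE < 0 or rng.random() < np.exp(-dE / T):
                    labels[i, j] = prop
    return labels

# toy "reflectivity layer": two regions plus noise (hypothetical data)
rng = np.random.default_rng(1)
obs = np.zeros((32, 32)); obs[:, 16:] = 1.0
obs += 0.2 * rng.standard_normal(obs.shape)
seg = metropolis_mrf_segmentation(obs)
print(seg[:4, 14:18])   # labels should flip near the region boundary
```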
Yang LI Zhuang MIAO Ming HE Yafei ZHANG Hang LI
How to represent images as highly compact binary codes is a critical issue in many computer vision tasks. Existing deep hashing methods typically focus on designing a loss function using pairwise or triplet labels. However, these methods ignore the attention mechanism of the human visual system. In this letter, we propose a novel Deep Attention Residual Hashing (DARH) method, which directly learns hash codes based on a simple pointwise classification loss function. Compared to previous methods, our method does not need to generate all possible pairwise or triplet labels from the training dataset. Specifically, we develop a new type of attention layer that can learn human eye fixations and significantly improves the representation ability of hash codes. In addition, we embed the attention layer into a residual network to simultaneously learn discriminative image features and hash codes in an end-to-end manner. Extensive experiments on standard benchmarks demonstrate that our method preserves instance-level similarity and outperforms state-of-the-art deep hashing methods in the image retrieval application.
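A minimal PyTorch sketch of the two ingredients, an attention branch inside a residual block and a pointwise classification loss on relaxed hash codes, is shown below; the layer sizes and architecture are illustrative assumptions, not the DARH network.

```python
import torch
import torch.nn as nn

class AttentionResidualBlock(nn.Module):
    """Residual block with a soft spatial attention branch: the block's
    features are reweighted by a learned saliency map before the skip
    connection, loosely mimicking eye-fixation-style attention."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.attn = nn.Sequential(             # 1-channel saliency map
            nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        f = self.body(x)
        return self.relu(x + f * self.attn(f))  # attended residual

class ToyHashNet(nn.Module):
    """Tiny network mapping images to k-bit codes, trained pointwise
    with a plain classification loss (no pairs or triplets needed)."""
    def __init__(self, n_bits=16, n_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, padding=1)
        self.block = AttentionResidualBlock(32)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.hash = nn.Linear(32, n_bits)
        self.cls = nn.Linear(n_bits, n_classes)

    def forward(self, x):
        h = self.pool(self.block(self.stem(x))).flatten(1)
        code = torch.tanh(self.hash(h))          # relaxed binary code
        return code, self.cls(code)

net = ToyHashNet()
code, logits = net(torch.randn(2, 3, 32, 32))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([3, 7]))
print(code.shape, logits.shape, float(loss))
bits = (code > 0).int()                           # binarize at test time
```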
Song-level feature summarization is an essential building block for browsing, retrieval, and indexing of digital music. This paper proposes a local pooling method that aggregates the feature vectors of a song over a universal background model. Two types of local activation patterns of feature vectors are derived: one representation takes the form of a histogram, and the other is given by a binary vector. Experiments on three publicly available music datasets show that the proposed local aggregation of auditory features is promising for music-similarity computation.
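A small sketch of the pooling idea: fit a Gaussian-mixture UBM on background features, then summarize a song by (1) a histogram of component activations and (2) a binary vector marking which components ever fire. Data, dimensionality, and thresholds are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# 1) train a universal background model (UBM) on features pooled from
#    many songs (random stand-in data here)
rng = np.random.default_rng(0)
background = rng.standard_normal((2000, 20))       # 20-dim auditory features
ubm = GaussianMixture(n_components=16, random_state=0).fit(background)

def song_summaries(song_feats, ubm, thresh=0.01):
    """Aggregate a song's frame features over the UBM.
    Returns (histogram, binary) summaries of local activations."""
    post = ubm.predict_proba(song_feats)           # (n_frames, n_components)
    hist = post.sum(axis=0)
    hist /= hist.sum()                             # activation histogram
    binary = (post.max(axis=0) > thresh).astype(int)  # component ever active?
    return hist, binary

song = rng.standard_normal((300, 20))              # one song's frames
hist, binary = song_summaries(song, ubm)
print(np.round(hist, 3))
print(binary)
```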
Yang LI Zhuang MIAO Jiabao WANG Yafei ZHANG Hang LI
The latest deep hashing methods perform hash code learning and image feature learning simultaneously by using pairwise or triplet labels. However, generating all possible pairwise or triplet labels from the training dataset can quickly become intractable, and the majority of those samples may produce small costs, resulting in slow convergence. In this letter, we propose a novel deep discriminative supervised hashing method, called DDSH, which directly learns hash codes based on a new combined loss function. Compared to previous methods, our method can take full advantage of the annotated data in terms of pairwise similarity and image identities. Extensive experiments on standard benchmarks demonstrate that our method preserves instance-level similarity and outperforms state-of-the-art deep hashing methods in the image retrieval application. Remarkably, our 16-bit binary representation can surpass the performance of existing 48-bit binary representations, which demonstrates that our method can effectively improve the speed and precision of large-scale image retrieval systems.
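The combined loss can be sketched as a pointwise classification term on image identities plus a pairwise term on code similarities; the cosine-based pairwise term below is an illustrative stand-in for DDSH's exact formulation.

```python
import torch
import torch.nn.functional as F

def combined_hashing_loss(codes, logits, labels, alpha=0.5):
    """Toy combined objective for supervised hashing:
    (1) pointwise classification loss on image identities, and
    (2) a pairwise term pulling codes of same-label images together
        and pushing different-label codes apart (cosine-based)."""
    cls_loss = F.cross_entropy(logits, labels)
    codes_n = F.normalize(codes, dim=1)
    sim = codes_n @ codes_n.t()                   # (B, B) cosine similarities
    same = (labels[:, None] == labels[None, :]).float()
    target = 2 * same - 1                         # +1 same label, -1 otherwise
    mask = 1 - torch.eye(len(labels))             # ignore the diagonal
    pair_loss = ((sim - target) ** 2 * mask).sum() / mask.sum()
    return cls_loss + alpha * pair_loss

# hypothetical batch: 8 relaxed 16-bit codes and class logits
codes = torch.tanh(torch.randn(8, 16, requires_grad=True))
logits = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))
loss = combined_hashing_loss(codes, logits, labels)
loss.backward()
print(float(loss))
```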
Toshihiko WAKAHARA Toshitaka MAKI Noriyasu YAMAMOTO Akihisa KODATE Manabu OKAMOTO Hiroyuki NISHI
The Life Intelligence and Office Information System (LOIS) Technical Committee of the Institute of Electronics, Information and Communication Engineers (IEICE) dates its origin to May 1986. This Technical Committee (TC) has covered the research fields of office-related systems for more than 30 years. Over this time, the TC, through its several name changes, has served as a forum for research and provided many opportunities for not only office users but also ordinary users of systems and services to present ideas and hold discussions. As a result, these advanced technologies have diffused from large enterprises to small companies and home users responsible for their management and operation. This paper sums up the technology trends and views of the office-related systems and services covered in 30 years of presentations to the LOIS Technical Committee by using a new literature analysis system based on the IEICE Knowledge Discovery (I-Scover) system.
Yuichi YOSHIDA Tsuyoshi TOYOFUKU
Descriptor aggregation techniques such as the Fisher vector and the vector of locally aggregated descriptors (VLAD) are used in most image retrieval frameworks. Extracting local descriptors takes time, and geometric verification requires considerable storage if a real-valued descriptor such as SIFT is used. Moreover, if we apply binary descriptors to such a framework, the image retrieval performance is worse than with a real-valued descriptor. Our approach tackles these issues by using a dual representation descriptor that has the advantages of both a real-valued and a binary descriptor. The real-valued part of the dual representation descriptor is aggregated into a VLAD in order to achieve high accuracy in image retrieval, and the binary part is used to find correspondences in the geometric verification stage in order to reduce the amount of storage needed. We implemented a dual representation descriptor that can be extracted in semi-real time by using the CARD descriptor. We evaluated the accuracy of our image retrieval framework, including the geometric verification, on three datasets (Holidays, UKBench, and Stanford mobile visual search). The results indicate that our framework is as accurate as one that uses SIFT. In addition, the experiments show that the image retrieval speed and storage requirements of our framework are as efficient as those of a framework that uses ORB.
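The aggregation step can be illustrated with a plain-NumPy VLAD: assign each local descriptor to its nearest visual word, accumulate residuals, and normalize. The descriptor dimensionality and codebook size are assumptions; the binary half of the dual representation would be stored separately for verification.

```python
import numpy as np
from sklearn.cluster import KMeans

def vlad(descriptors, codebook):
    """Aggregate local descriptors into a VLAD vector: sum of residuals
    to each visual word, then intra-normalize and L2-normalize."""
    k, d = codebook.shape
    assign = np.argmin(
        ((descriptors[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    v = np.zeros((k, d))
    for i, word in enumerate(assign):
        v[word] += descriptors[i] - codebook[word]
    v /= np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1e-12)
    v = v.ravel()
    return v / max(np.linalg.norm(v), 1e-12)

# toy setup: 64-dim real-valued descriptors (stand-ins for the real-valued
# half of a dual representation descriptor such as CARD)
rng = np.random.default_rng(0)
train = rng.standard_normal((500, 64))
codebook = KMeans(n_clusters=16, n_init=10,
                  random_state=0).fit(train).cluster_centers_
image_desc = rng.standard_normal((120, 64))
print(vlad(image_desc, codebook).shape)     # (1024,) = 16 words x 64 dims
```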
Miki HASEYAMA Takahiro OGAWA Sho TAKAHASHI Shuhei NOMURA Masatsugu SHIMOMURA
Biomimetics is a new research field that creates innovation through the collaboration of different existing research fields. However, collaboration, i.e., the exchange of deep knowledge between different research fields, is difficult for several reasons, such as differences in the technical terms used in different fields. In order to overcome this problem, we have developed a new retrieval platform, the “Biomimetics image retrieval platform,” using a visualization-based image retrieval technique. A biological database contains a large volume of image data, and by taking advantage of these image data, we are able to overcome the limitations of text-only information retrieval. By realizing such a retrieval platform that does not depend on technical terms, individual biological databases of various species can be integrated. This allows not only the use of the data for the study of various species by researchers in different biological fields but also access by a wide range of researchers in fields ranging from materials science and mechanical engineering to manufacturing. Therefore, our platform provides a new path bridging different fields and will contribute to the development of biomimetics, since it overcomes the limitations of traditional retrieval platforms.
Automatic music transcription from audio has long been one of the most intriguing and challenging problems in the field of music information retrieval, because it requires a series of low-level tasks such as onset/offset detection and F0 estimation, followed by high-level post-processing for symbolic representation. In this paper, a comprehensive transcription system for monophonic singing voice based on harmonic structure analysis is proposed. Given a precise tracking of the fundamental frequency, a novel acoustic feature is derived to signify the harmonic structure in singing voice signals, regardless of loudness and pitch. It is then used to generate a parametric mixture model based on the von Mises-Fisher distribution, so that the model represents the intrinsic harmonic structures within a region of smoothly connected notes. To identify the note boundaries, the local homogeneity of the harmonic structure is exploited by two different methods: self-similarity analysis and a hidden Markov model. The proposed system identifies the note attributes, including onset time, duration, and note pitch. Evaluations are conducted from various aspects to verify the performance improvement of the proposed system and its robustness, using the latest evaluation methodology for singing transcription. The results show that the proposed system significantly outperforms other systems, including state-of-the-art ones.
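Two ingredients can be sketched compactly: a loudness- and pitch-invariant harmonic feature (unit-normalized harmonic magnitudes) and the von Mises-Fisher log-density used to model such unit vectors. The harmonic magnitudes and concentration parameter below are hypothetical.

```python
import numpy as np
from scipy.special import ive

def vmf_logpdf(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit
    sphere S^{p-1}; `ive` is the exponentially scaled Bessel function,
    used for numerical stability at large kappa."""
    p = len(mu)
    log_c = ((p / 2 - 1) * np.log(kappa)
             - (p / 2) * np.log(2 * np.pi)
             - (np.log(ive(p / 2 - 1, kappa)) + kappa))
    return log_c + kappa * (x @ mu)

def harmonic_feature(mags):
    """Unit-normalized harmonic magnitudes: the normalization discards
    loudness, and indexing by harmonic number discards absolute pitch."""
    h = np.asarray(mags, float)
    return h / np.linalg.norm(h)

note_a = harmonic_feature([1.0, 0.6, 0.3, 0.2, 0.1])   # hypothetical note
note_b = harmonic_feature([0.9, 0.7, 0.3, 0.1, 0.1])   # similar timbre
mu = harmonic_feature([1.0, 0.65, 0.3, 0.15, 0.1])     # mixture-component mean
print(vmf_logpdf(note_a, mu, kappa=50.0))   # high log-likelihood: same region
print(vmf_logpdf(note_b, mu, kappa=50.0))
```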
The translation-based language model (TRLM) is a state-of-the-art method for solving the lexical gap problem of question retrieval in community-based question answering (cQA). Researchers have sought methods for addressing the lexical gap and improving the TRLM. In this paper, we propose a new dependency-based model (DM) for question retrieval and explore how to utilize the results of a dependency parser for cQA. Dependency bigrams are extracted from the dependency parser, and the language model is transformed using the dependency bigrams as bigram features. As a result, we obtain significantly improved performance when the TRLM and DM approaches are effectively combined.
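A minimal sketch of dependency-bigram extraction, using spaCy as an assumed stand-in parser (the paper does not specify a library), is shown below; it requires the en_core_web_sm model to be installed.

```python
# assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def dependency_bigrams(text):
    """Extract (head, dependent) word pairs from the dependency parse;
    these can be plugged into a language model as bigram features,
    capturing relations that surface bigrams miss."""
    doc = nlp(text)
    return [(tok.head.text.lower(), tok.text.lower())
            for tok in doc if tok.head is not tok]   # skip the root

q1 = "How can I remove a virus from my computer?"
q2 = "How do I get rid of a computer virus?"
print(dependency_bigrams(q1))
print(dependency_bigrams(q2))
# both parses relate "virus" to the removal verb even though the
# surface word order differs between the two questions
```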
Abu Nowshed CHY Md Zia ULLAH Masaki AONO
Microblogs, especially Twitter, have become an integral part of our daily life for searching for the latest news and event information. Due to the short length of tweets and the frequent use of unconventional abbreviations, content-relevance-based search cannot satisfy users' information needs. Recent research has shown that considering temporal and contextual aspects has improved retrieval performance significantly. In this paper, we focus on microblog retrieval, emphasizing the alleviation of the vocabulary mismatch and the leverage of the temporal (e.g., recency and burstiness) and contextual characteristics of tweets. To address the temporal and contextual aspects of tweets, we propose new features based on query-tweet time, word embedding, and query-tweet sentiment correlation. We also introduce some popularity features to estimate the importance of a tweet. A three-stage query expansion technique is applied to improve the relevancy of tweets. Moreover, to determine the temporal and sentiment sensitivity of a query, we introduce query type determination techniques. After supervised feature selection, we apply random forest as a feature ranking method to estimate the importance of the selected features. Then, we make use of an ensemble learning-to-rank (L2R) framework to estimate the relevance of each query-tweet pair. We conducted experiments on the TREC Microblog 2011 and 2012 test collections over the TREC Tweets2011 corpus. Experimental results demonstrate the effectiveness of our method over the baseline and known related works in terms of precision at 30 (P@30), mean average precision (MAP), normalized discounted cumulative gain at 30 (NDCG@30), and R-precision (R-Prec) metrics.
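The feature-ranking step can be sketched with scikit-learn: a random forest fit on query-tweet features yields impurity-based importances that rank the features before the L2R stage. The feature names, data, and labels below are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# hypothetical query-tweet features mirroring the abstract's groups:
feature_names = ["bm25", "recency", "burst", "embed_sim",
                 "sentiment_corr", "retweets", "followers"]
rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names)))
# synthetic relevance labels correlated with a few of the features
y = ((0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 3]
      + 0.1 * rng.standard_normal(500)) > 0.55).astype(int)

# random forest as a feature-ranking method: impurity-based importances
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(feature_names, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>15s}: {imp:.3f}")
# the ranked features would then feed an ensemble learning-to-rank model
```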
Go IRIE Hiroyuki ARAI Yukinobu TANIGUCHI
This paper presents an unsupervised approach to feature binary coding for efficient semantic image retrieval. Although the majority of existing methods aim to preserve neighborhood structures of the feature space, semantically similar images are not always found in such neighborhoods but are rather distributed along non-linear low-dimensional manifolds. Moreover, images are rarely alone on the Internet and are often surrounded by text data such as tags, attributes, and captions, which tend to carry rich semantic information about the images. On the basis of these observations, the approach presented in this paper aims at learning binary codes for semantic image retrieval using multimodal information sources while preserving the essential low-dimensional structures of the data distributions in the Hamming space. Specifically, after finding the low-dimensional structures of the data by using an unsupervised sparse coding technique, our approach learns a set of linear projections for binary coding by solving an optimization problem designed to jointly preserve the extracted data structures and the multimodal data correlations between images and texts in the Hamming space as much as possible. We show that the joint optimization problem can readily be transformed into a generalized eigenproblem that can be solved efficiently. Extensive experiments demonstrate that our method yields significant performance gains over several existing methods.
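The final optimization step can be sketched in miniature: preserving a similarity structure under a projection-orthogonality constraint leads to a generalized eigenproblem, solved directly with scipy. The objective below is a simplified stand-in for the paper's joint structure-and-correlation formulation.

```python
import numpy as np
from scipy.linalg import eigh

def learn_binary_projections(X, S, n_bits=8, reg=1e-3):
    """Toy linear binary coding via a generalized eigenproblem.
    X: (n, d) features; S: (n, n) similarity weights to preserve.
    Maximizes tr(W^T X^T S X W) s.t. W^T (X^T X) W = I, i.e. the
    generalized eigenproblem (X^T S X) w = lambda (X^T X) w."""
    A = X.T @ S @ X
    B = X.T @ X + reg * np.eye(X.shape[1])
    vals, vecs = eigh(A, B)              # solves A w = lambda B w
    return vecs[:, ::-1][:, :n_bits]     # top-n_bits eigenvectors

def encode(X, W):
    return (X @ W > 0).astype(np.uint8)  # Hamming-space codes

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 12))
# similarity graph from (hypothetical) low-dimensional-structure neighbors
D = ((X[:, None] - X[None]) ** 2).sum(-1)
S = (D < np.quantile(D, 0.1)).astype(float)
W = learn_binary_projections(X, S)
print(encode(X, W)[:3])
```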
Leilei KONG Zhimao LU Zhongyuan HAN Haoliang QI
This paper addresses the issue of source retrieval in plagiarism detection, the task of retrieving all plagiarized sources of a suspicious document from a source document corpus while minimizing retrieval costs. Classification-based methods have achieved the best performance in current research on source retrieval. This paper points out that it is more appropriate to cast the problem as ranking and to employ learning-to-rank methods to perform source retrieval. Specifically, we employ RankBoost and Ranking SVM to obtain the candidate plagiarism source documents. Experimental results on the dataset of the PAN@CLEF 2013 Source Retrieval task show that the ranking-based methods significantly outperform the classification-based baselines. We argue that treating source retrieval as a ranking problem rather than a classification problem better suits the task.
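A Ranking-SVM-style baseline can be sketched via the standard pairwise transform: build difference vectors between candidate sources of the same suspicious document and fit a linear SVM on them. The features, labels, and groups below are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_transform(X, y, groups):
    """Build pairwise difference vectors within each query (suspicious
    document): x_i - x_j labeled +1 if doc i should rank above doc j."""
    Xp, yp = [], []
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        for i in idx:
            for j in idx:
                if y[i] > y[j]:
                    Xp.append(X[i] - X[j]); yp.append(1)
                    Xp.append(X[j] - X[i]); yp.append(-1)
    return np.array(Xp), np.array(yp)

# hypothetical retrieval features per candidate source document
rng = np.random.default_rng(0)
X = rng.random((60, 5))                  # e.g., similarity, overlap, ...
y = (X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.random(60) > 0.8).astype(int)
groups = np.repeat(np.arange(6), 10)     # 6 suspicious documents

Xp, yp = pairwise_transform(X, y, groups)
ranker = LinearSVC(C=1.0).fit(Xp, yp)    # Ranking-SVM-style linear model
scores = X @ ranker.coef_.ravel()        # rank candidates by score
print(np.argsort(-scores[:10]))          # ranking for the first query
```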