
Keyword Search Result

[Keyword] search (415 hits)

Results 41-60 of 415

  • Video Search Reranking with Relevance Feedback Using Visual and Textual Similarities

    Takamasa FUJII  Soh YOSHIDA  Mitsuji MUNEYASU

    PAPER-Multimedia Environment Technology
    Vol: E102-A No:12  Page(s): 1900-1909

    In video search reranking, in addition to the well-known semantic gap, the intent gap, i.e., the gap between the representation of the users' demand and their real search intention, is becoming a major problem restricting the improvement of reranking performance. To address this problem, we propose video search reranking based on a semantic representation by multiple tags. The proposed method uses relevance feedback, in which the user specifies a few example videos from the initial search results, to reduce the gap between the users' real intent and the video search results. In addition, we focus on the fact that multiple tags are used to represent video contents. By vectorizing the multiple tags associated with each video using the Word2Vec algorithm and taking the centroid of the tag vectors as a collective representation, we can evaluate the semantic similarity between videos using tag features. Experiments on the YouTube-8M dataset show that our reranking approach is effective and efficient.
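
    As a rough illustration of the tag-centroid idea above, the sketch below averages per-tag embedding vectors and compares two videos by cosine similarity. The tag names and 4-dimensional vectors are invented placeholders; the paper uses Word2Vec embeddings, and its exact similarity computation may differ.

```python
import numpy as np

# Hypothetical pre-trained tag embeddings (the paper uses Word2Vec);
# the tags and 4-D vectors below are illustrative only.
tag_vectors = {
    "cat":    np.array([0.9, 0.1, 0.0, 0.2]),
    "kitten": np.array([0.8, 0.2, 0.1, 0.3]),
    "soccer": np.array([0.0, 0.9, 0.8, 0.1]),
    "goal":   np.array([0.1, 0.8, 0.9, 0.0]),
}

def video_tag_feature(tags):
    """Centroid of the tag vectors: a collective representation of a video."""
    vecs = [tag_vectors[t] for t in tags if t in tag_vectors]
    return np.mean(vecs, axis=0)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantic similarity between two videos via their tag centroids.
v1 = video_tag_feature(["cat", "kitten"])
v2 = video_tag_feature(["soccer", "goal"])
print(cosine_similarity(v1, v2))
```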

  • Enhancing the Performance of Cuckoo Search Algorithm with Multi-Learning Strategies Open Access

    Li HUANG  Xiao ZHENG  Shuai DING  Zhi LIU  Jun HUANG

    PAPER-Fundamentals of Information Systems
    Publicized: 2019/07/09  Vol: E102-D No:10  Page(s): 1916-1924

    The Cuckoo Search (CS) algorithm is apt to become trapped in local optima on complex target functions, a drawback recognized as the bottleneck to its widespread use. To improve CS, this paper puts forward a Cuckoo Search algorithm featuring Multi-Learning Strategies (LSCS). In LSCS, the Converted Learning Module, which features the Comprehensive Learning Strategy and the Optimal Learning Strategy, coordinates cooperation between exploration and exploitation; switching within this module is decided by the transition probability Pc. When a nest fails to be renewed after m iterations, the Elite Learning Perturbation Module provides extra diversity for the current nest and thereby avoids stagnation. A Boundary Handling Approach adjusted by the Gauss map is utilized to reset the location of nests that fall outside the boundary. The proposed algorithm is evaluated on two different test sets: Test Group A (ten simple unimodal and multimodal functions) and Test Group B (the CEC2013 test suite). Experimental results show that LSCS demonstrates significant advantages in convergence speed and optimization capability on complex problems.
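
    For orientation, here is a generic cuckoo search skeleton in the spirit of the algorithm above. It is a minimal sketch: the Lévy-flight step and nest abandonment are standard CS components, the Pc switch between two moves is only an illustrative stand-in for the paper's Comprehensive/Optimal Learning strategies, and the Elite Learning Perturbation and Gauss-map boundary handling are not reproduced.

```python
import numpy as np

def levy_flight(dim, beta=1.5):
    # Mantegna's algorithm for Levy-distributed steps (standard in CS).
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=10, n_nests=25, pa=0.25, pc=0.5, iters=1000):
    nests = np.random.uniform(-5, 5, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        for i in range(n_nests):
            # Toggle between a Levy step and a move toward the best nest
            # with probability pc (stand-in for the learning strategies).
            if np.random.rand() < pc:
                cand = nests[i] + 0.01 * levy_flight(dim) * (nests[i] - best)
            else:
                cand = nests[i] + np.random.rand(dim) * (best - nests[i])
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        # Abandon a fraction pa of the worst nests (standard CS step).
        worst = fit.argsort()[-int(pa * n_nests):]
        nests[worst] = np.random.uniform(-5, 5, (len(worst), dim))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fit.argmin()], fit.min()

# Example: minimize the sphere function.
x, fx = cuckoo_search(lambda x: float(np.sum(x ** 2)))
print(fx)
```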

  • On the Competitive Analysis for the Multi-Objective Time Series Search Problem

    Toshiya ITOH  Yoshinori TAKEI

    PAPER-Optimization
    Vol: E102-A No:9  Page(s): 1150-1158

    For the multi-objective time series search problem, Hasegawa and Itoh [Theoretical Computer Science, Vol.78, pp.58-66, 2018] presented the best possible online algorithm, the balanced price policy, for any monotone function f: R^k → R. The competitive ratio with respect to the monotone function f(c1,…,ck) = (c1+…+ck)/k is referred to as the arithmetic mean component competitive ratio. Hasegawa and Itoh derived its explicit representation for k=2, but it has not been known for any integer k≥3. In this paper, we derive the explicit representations of the arithmetic mean component competitive ratio for k=3 and k=4. On the other hand, we show that it is computationally difficult to derive the explicit representation for arbitrary integer k in a way similar to the cases k=2, 3, and 4.
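
    For reference, the objective named above can be written out as follows. The competitive-ratio definition shown is the standard worst-case criterion for online maximization problems; the explicit closed forms for k=3, 4 are the paper's contribution and are not reproduced here.

```latex
\[
  f(c_1, \dots, c_k) \;=\; \frac{c_1 + \cdots + c_k}{k}
  \qquad \text{(arithmetic mean component)}
\]
\[
  \text{an online algorithm } \mathrm{ALG} \text{ is } c\text{-competitive if}\quad
  \sup_{\sigma}\,
  \frac{f\bigl(\mathrm{OPT}(\sigma)\bigr)}{f\bigl(\mathrm{ALG}(\sigma)\bigr)} \;\le\; c,
\]
where $\sigma$ ranges over input sequences and $\mathrm{OPT}$ denotes the offline optimum.
```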

  • Physical Cell ID Detection Probabilities Using Frequency Domain PVS Transmit Diversity for NB-IoT Radio Interface

    Aya SHIMURA  Mamoru SAWAHASHI  Satoshi NAGATA  Yoshihisa KISHIYAMA

    PAPER
    Publicized: 2019/02/20  Vol: E102-B No:8  Page(s): 1477-1489

    This paper proposes frequency domain precoding vector switching (PVS) transmit diversity for synchronization signals to achieve fast physical cell identity (PCID) detection for the narrowband (NB)-Internet-of-Things (IoT) radio interface. More specifically, we propose localized and distributed frequency domain PVS transmit diversity schemes for the narrowband primary synchronization signal (NPSS) and narrowband secondary synchronization signal (NSSS), together with NPSS and NSSS detection methods, including a frequency offset estimation method suited to frequency domain PVS transmit diversity, at the receiver in a set of user equipment (UE). We conduct link-level simulations to compare the detection probabilities of NPSS and NSSS, i.e., of the PCID, using the proposed frequency domain PVS transmit diversity schemes to those using the conventional time domain PVS transmit diversity scheme. The results show that both the distributed and localized frequency domain PVS transmit diversity schemes achieve a PCID detection probability almost identical to that of the time domain scheme when the frequency offset due to the frequency error of the UE temperature-compensated crystal oscillator (TCXO) is not considered. We also show that for a maximum frequency offset below approximately 8 kHz, localized PVS transmit diversity achieves almost the same PCID detection probability as the time domain scheme. It also achieves a higher PCID detection probability than one-antenna transmission, although it is degraded relative to time domain PVS transmit diversity when the maximum frequency offset exceeds approximately 10 kHz.
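
    To picture the difference between the two proposed mappings, the toy sketch below assigns two precoding vectors across 12 subcarriers either in contiguous blocks (localized) or in alternation (distributed). The vector values, block sizes, and two-antenna setup are assumptions for illustration; the paper's actual NPSS/NSSS designs and switching patterns are more detailed.

```python
import numpy as np

# Two hypothetical precoding vectors for a two-antenna transmitter.
p1 = np.array([1, 1]) / np.sqrt(2)
p2 = np.array([1, -1]) / np.sqrt(2)

n_sc = 12  # one NB-IoT PRB worth of subcarriers

# Localized: each vector spans a contiguous block of subcarriers.
localized = [p1 if sc < n_sc // 2 else p2 for sc in range(n_sc)]

# Distributed: the vectors alternate subcarrier by subcarrier.
distributed = [p1 if sc % 2 == 0 else p2 for sc in range(n_sc)]

# Per-subcarrier transmit signal for symbol s: x[sc] = p[sc] * s[sc].
symbols = np.exp(1j * np.pi * np.arange(n_sc) / n_sc)  # toy sync sequence
x_localized = np.array([p * s for p, s in zip(localized, symbols)])
print(x_localized.shape)  # (12, 2): subcarriers x antennas
```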

  • Characterizing Link-2 LR-Visibility Polygons and Related Problems

    Xuehou TAN  Bo JIANG

    PAPER-Algorithms and Data Structures
    Vol: E102-A No:2  Page(s): 423-429

    Two points x, y inside a simple polygon P are said to be mutually link-2 visible if there exists a third point z ∈ P such that z is visible from both x and y. The polygon P is link-2 LR-visible if there are two points s, t on the boundary of P such that every point on the clockwise boundary of P from s to t is link-2 visible from some point of the other boundary of P from t to s, and vice versa. We give a characterization of link-2 LR-visibility polygons by generalizing the known result on LR-visibility polygons. A new idea is to extend the concepts of ray shooting and components to their counterparts under the notion of link-2 visibility. We then develop an O(n log n) time algorithm to determine whether a given polygon is link-2 LR-visible. Using the characterization of link-2 LR-visibility polygons, we further present an O(n log n) time algorithm for determining whether a polygonal region is searchable by a k-searcher, k ≥ 2, improving upon the previous O(n²) time bound [9]. A polygonal region P is said to be searchable by a searcher if the searcher can detect (or see) an unpredictable intruder inside the region, no matter how fast the intruder moves. A k-searcher holds k flashlights and can see only along the rays of the flashlights emanating from his position.

  • No-Dictionary Searchable Symmetric Encryption Open Access

    Wakaha OGATA  Kaoru KUROSAWA

    PAPER
    Vol: E102-A No:1  Page(s): 114-124

    In the model of no-dictionary searchable symmetric encryption (SSE) schemes, the client does not need to keep the list of keywords W. In this paper, we first show a generic method to transform any passively secure SSE scheme to a no-dictionary SSE scheme such that the client can verify search results even if w ∉ W. In particular, it takes only O(1) time for the server to prove that w ∉ W. We next present a no-dictionary SSE scheme such that the client can hide even the search pattern from the server.

  • A New DY Conjugate Gradient Method and Applications to Image Denoising

    Wei XUE  Junhong REN  Xiao ZHENG  Zhi LIU  Yueyong LIANG

    PAPER-Fundamentals of Information Systems
    Publicized: 2018/09/14  Vol: E101-D No:12  Page(s): 2984-2990

    The Dai-Yuan (DY) conjugate gradient method is an effective method for solving large-scale unconstrained optimization problems. In this paper, a new DY method possessing a spectral conjugate parameter βk is presented. An attractive property of the proposed method is that the search direction generated at each iteration is a descent direction, independent of the line search. Global convergence of the proposed method is established when strong Wolfe conditions are employed. Finally, comparison experiments on impulse noise removal are reported to demonstrate the effectiveness of the proposed method.
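
    For context, the sketch below implements the classical DY update, in which the conjugate parameter is beta_k = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k)); the paper's spectral variant of βk differs. A simple backtracking line search stands in for the strong Wolfe line search assumed by the convergence theory, so this is illustrative rather than faithful.

```python
import numpy as np

def dy_conjugate_gradient(f, grad, x0, iters=200, tol=1e-8):
    """Nonlinear CG with the classical Dai-Yuan parameter."""
    x, g = x0, grad(x0)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search (sufficient decrease only; the theory
        # assumes strong Wolfe conditions).
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / d.dot(g_new - g)   # Dai-Yuan beta
        d = -g_new + beta * d                        # new search direction
        x, g = x_new, g_new
    return x

# Example: minimize the quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = dy_conjugate_gradient(lambda x: 0.5 * x @ A @ x - b @ x,
                          lambda x: A @ x - b,
                          np.zeros(2))
print(x)  # close to the solution of A x = b
```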

  • Highly Efficient Mobile Visual Search Algorithm

    Chuang ZHU  Xiao Feng HUANG  Guo Qing XIANG  Hui Hui DONG  Jia Wen SONG

    PAPER-Data Engineering, Web Information Systems
    Publicized: 2018/09/14  Vol: E101-D No:12  Page(s): 3073-3082

    In this paper, we propose a highly efficient mobile visual search algorithm. For the descriptor extraction process, we propose a low-complexity feature detection method that utilizes the local key points detected in the coarse octaves to guide scale space construction and feature detection in the fine octave. The Gaussian and Laplacian operations are skipped for unimportant areas, which saves computing time. In addition, feature selection is placed before orientation computing to further reduce the complexity of feature detection by pre-discarding some unimportant local points. For the image retrieval process, we design a high-performance reranking method that merges the global descriptor matching score and the local descriptor similarity score (LDSS). In calculating the LDSS, tf-idf weighted histogram matching is performed to integrate the statistical information of the database. The results show that the proposed approach achieves performance comparable to the state of the art for mobile visual search, while largely reducing the descriptor extraction complexity.
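
    The tf-idf weighted matching step can be pictured as below: idf weights are derived from a database of bag-of-visual-words histograms, and a weighted histogram intersection serves as an illustrative stand-in for the paper's LDSS. The toy 5-word vocabulary is an assumption.

```python
import numpy as np

def tfidf_weights(db_histograms):
    """idf over a database of bag-of-visual-words histograms: visual
    words appearing in few database images get higher weight."""
    db = np.asarray(db_histograms, dtype=float)
    doc_freq = np.count_nonzero(db > 0, axis=0)
    return np.log(len(db) / np.maximum(doc_freq, 1))

def ldss(query_hist, db_hist, idf):
    """Weighted histogram-intersection score between a query image and
    a database image (an illustrative stand-in for the paper's LDSS)."""
    q = query_hist * idf
    d = db_hist * idf
    return float(np.minimum(q, d).sum() / max(q.sum(), 1e-12))

# Toy 5-word vocabulary, 3 database images.
db = [[2, 0, 1, 0, 0], [0, 3, 0, 1, 0], [1, 1, 0, 0, 2]]
idf = tfidf_weights(db)
query = np.array([2, 0, 1, 0, 0], float)
print([ldss(query, np.array(h, float), idf) for h in db])
```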

  • Local Feature Reliability Measure Consistent with Match Conditions for Mobile Visual Search

    Kohei MATSUZAKI  Kazuyuki TASAKA  Hiromasa YANAGIHARA

    PAPER-Image Processing and Video Processing
    Publicized: 2018/09/12  Vol: E101-D No:12  Page(s): 3170-3180

    We propose a feature design method for a mobile visual search based on binary features and a bag-of-visual words framework. In mobile visual search, detection error and quantization error are unavoidable due to viewpoint changes and cause performance degradation. Typical approaches to visual search extract features from a single view of reference images, though such features are insufficient to manage detection and quantization errors. In this paper, we extract features from multiview synthetic images. These features are selected according to our novel reliability measure which enables robust recognition against various viewpoint changes. We regard feature selection as a maximum coverage problem. That is, we find a finite set of features maximizing an objective function under certain constraints. As this problem is NP-hard and thus computationally infeasible, we explore approximate solutions based on a greedy algorithm. For this purpose, we propose novel constraint functions which are designed to be consistent with the match conditions in the visual search method. Experiments show that the proposed method improves retrieval accuracy by 12.7 percentage points without increasing the database size or changing the search procedure. In other words, the proposed method enables more accurate search without adversely affecting the database size, computational cost, and memory requirement.
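
    Since the paper casts feature selection as maximum coverage solved greedily, the generic greedy sketch below may help fix ideas. The paper's objective and constraint functions are more elaborate (they encode the visual-search match conditions), so the universe of "conditions" covered by each feature here is purely illustrative.

```python
def greedy_max_coverage(features, budget):
    """Pick up to `budget` features maximizing the number of covered
    conditions. `features` maps a feature id to the set of conditions
    (e.g. synthetic views matched) that it covers."""
    covered, chosen = set(), []
    for _ in range(budget):
        # Greedily take the feature with the largest marginal gain;
        # this gives the classic (1 - 1/e) approximation guarantee.
        best = max(features, key=lambda f: len(features[f] - covered),
                   default=None)
        if best is None or not (features[best] - covered):
            break
        chosen.append(best)
        covered |= features.pop(best)
    return chosen, covered

feats = {"f1": {1, 2, 3}, "f2": {3, 4}, "f3": {4, 5, 6}, "f4": {1, 6}}
print(greedy_max_coverage(feats, budget=2))  # picks f1 and f3
```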

  • Speeding up Extreme Multi-Label Classifier by Approximate Nearest Neighbor Search

    Yukihiro TAGAMI

    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2018/08/06  Vol: E101-D No:11  Page(s): 2784-2794

    Extreme multi-label classification methods have been widely used in Web-scale classification tasks such as Web page tagging and product recommendation. In this paper, we present a novel graph embedding method called “AnnexML”. At the training step, AnnexML constructs a k-nearest neighbor graph of label vectors and attempts to reproduce the graph structure in the embedding space. The prediction is efficiently performed by using an approximate nearest neighbor search method that efficiently explores the learned k-nearest neighbor graph in the embedding space. We conducted evaluations on several large-scale real-world data sets and compared our method with recent state-of-the-art methods. Experimental results show that our AnnexML can significantly improve prediction accuracy, especially on data sets that have a larger label space. In addition, AnnexML improves the trade-off between prediction time and accuracy. At the same level of accuracy, the prediction time of AnnexML was up to 58 times faster than that of SLEEC, a state-of-the-art embedding-based method.
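
    A minimal sketch of the prediction side: embed samples, then answer queries with a k-nearest-neighbor search in the embedding space and vote over neighbors' labels. The random vectors stand in for AnnexML's learned embeddings, and exact k-NN is used here, whereas the paper's speed-up comes from an approximate search over the learned k-NN graph.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10000, 64))           # placeholder embeddings
Y_train = [{i % 50} for i in range(10000)]       # toy label sets

index = NearestNeighbors(n_neighbors=10).fit(X_train)

def predict(x_query):
    # Exact k-NN here; AnnexML uses an approximate graph search instead.
    _, idx = index.kneighbors(x_query.reshape(1, -1))
    votes = {}
    for i in idx[0]:
        for label in Y_train[i]:
            votes[label] = votes.get(label, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)  # ranked labels

print(predict(rng.normal(size=64))[:5])
```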

  • Studying the Cost and Effectiveness of OSS Quality Assessment Models: An Experience Report of Fujitsu QNET

    Yasutaka KAMEI  Takahiro MATSUMOTO  Kazuhiro YAMASHITA  Naoyasu UBAYASHI  Takashi IWASAKI  Shuichi TAKAYAMA

    PAPER-Software Engineering
    Publicized: 2018/08/08  Vol: E101-D No:11  Page(s): 2744-2753

    Nowadays, open source software (OSS) systems are adopted by proprietary software projects. To reduce the risk of using problematic OSS systems (e.g., causing system crashes), it is important for proprietary software projects to assess OSS systems in advance. Therefore, OSS quality assessment models are studied to obtain information regarding the quality of OSS systems. Although the OSS quality assessment models are partially validated using a small number of case studies, to the best of our knowledge, there are few studies that empirically report how industrial projects actually use OSS quality assessment models in their own development process. In this study, we empirically evaluate the cost and effectiveness of OSS quality assessment models at Fujitsu Kyushu Network Technologies Limited (Fujitsu QNET). To conduct the empirical study, we collect datasets from (a) 120 OSS projects that Fujitsu QNET's projects actually used and (b) 10 problematic OSS projects that caused major problems in the projects. We find that (1) it takes average and median times of 51 and 49 minutes, respectively, to gather all assessment metrics per OSS project and (2) there is a possibility that we can filter problematic OSS systems by using the threshold derived from a pool of assessment metrics. Fujitsu QNET's developers agree that our results lead to improvements in Fujitsu QNET's OSS assessment process. We believe that our work significantly contributes to the empirical knowledge about applying OSS assessment techniques to industrial projects.
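
    The filtering idea in finding (2) can be pictured in a few lines. The metric names and threshold values below are invented for illustration; the paper derives its thresholds from a pool of assessment metrics.

```python
# Flag an OSS project as risky when any assessment metric exceeds its
# threshold. Both metrics and limits here are hypothetical examples.
thresholds = {"days_since_last_commit": 365, "open_bug_ratio": 0.5}

def is_risky(project_metrics):
    return any(project_metrics.get(name, 0) > limit
               for name, limit in thresholds.items())

print(is_risky({"days_since_last_commit": 700, "open_bug_ratio": 0.1}))  # True
```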

  • Search-Based Concolic Execution for SW Vulnerability Discovery

    Rustamov FAYOZBEK  Minjun CHOI  Joobeom YUN

    LETTER-Data Engineering, Web Information Systems
    Publicized: 2018/07/02  Vol: E101-D No:10  Page(s): 2526-2529

    Vast amounts of software are produced nowadays, and as the amount of software grows, so does the number of software vulnerabilities. Although some automatic methods have been proposed to detect and remove software vulnerabilities, they still require a great deal of time, which limits their use in the real world. To solve this problem, we propose BugHunter, which automatically tests a binary file compiled with a C++ compiler. It searches for unsafe API calls and automatically drives execution to the program blocks that contain them. Experiments also show that BugHunter is more efficient than angr. As a result, BugHunter is very helpful for finding software vulnerabilities in a short time.
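
    As a rough picture of the first stage only (locating unsafe API call sites), the sketch below greps a binary's disassembly for a small list of classically unsafe C APIs. The path ./target_binary is a placeholder, and the concolic-execution stage that actually drives the program to those blocks is far more involved and not shown.

```python
import subprocess

# Classically unsafe C/C++ string and memory APIs (illustrative list).
UNSAFE_APIS = {"strcpy", "strcat", "sprintf", "gets", "memcpy"}

def find_unsafe_calls(binary_path):
    """List call sites of unsafe APIs by scanning objdump disassembly."""
    asm = subprocess.run(["objdump", "-d", binary_path],
                         capture_output=True, text=True).stdout
    hits = []
    for line in asm.splitlines():
        if "call" in line and any(api in line for api in UNSAFE_APIS):
            hits.append(line.strip())
    return hits

for call_site in find_unsafe_calls("./target_binary"):  # hypothetical path
    print(call_site)
```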

  • Enumerating All Spanning Shortest Path Forests with Distance and Capacity Constraints

    Yu NAKAHATA  Jun KAWAHARA  Takashi HORIYAMA  Shoji KASAHARA

    PAPER
    Vol: E101-A No:9  Page(s): 1363-1374

    This paper studies a variant of the graph partitioning problem, called the evacuation planning problem, which asks us to partition a target area, represented by a graph, into several regions so that each region contains exactly one shelter. Each region must be convex to reduce intersections of evacuation routes, the distance from each point to its shelter must be bounded so that inhabitants can quickly evacuate from a disaster, and the number of inhabitants assigned to each shelter must not exceed the shelter's capacity. This paper formulates the convexity of connected components as a spanning shortest path forest for general graphs and proposes a novel algorithm to tackle this multi-objective optimization problem. The algorithm not only obtains a single partition but also enumerates all partitions simultaneously satisfying the above complex constraints, which is difficult for existing algorithms, using zero-suppressed binary decision diagrams (ZDDs) as a compressed expression. The efficiency of the proposed algorithm is confirmed by experiments using real-world map data. The results show that the proposed algorithm can obtain hundreds of millions of partitions satisfying all the constraints for input graphs with a hundred edges in a few minutes.
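
    To make the distance constraint concrete, the sketch below checks a single candidate partition: each vertex must reach its assigned shelter within a bound while staying inside its own region (a BFS approximation of the shortest-path-forest condition). The paper instead enumerates all feasible partitions at once with ZDDs, which this sketch does not attempt.

```python
from collections import deque

def shortest_path_forest_ok(adj, shelters, assignment, dist_bound):
    """Check one candidate partition: every vertex must reach its
    assigned shelter within dist_bound via vertices of its own region."""
    for s in shelters:
        region = {v for v, r in assignment.items() if r == s}
        dist = {s: 0}
        q = deque([s])
        while q:                      # BFS restricted to the region
            u = q.popleft()
            for v in adj[u]:
                if v in region and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if any(v not in dist or dist[v] > dist_bound for v in region):
            return False
    return True

# Path graph 0-1-2-3 with shelters at vertices 0 and 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(shortest_path_forest_ok(adj, shelters=[0, 3],
                              assignment={0: 0, 1: 0, 2: 3, 3: 3},
                              dist_bound=2))  # True
```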

  • Entity Ranking for Queries with Modifiers Based on Knowledge Bases and Web Search Results

    Wiradee IMRATTANATRAI  Makoto P. KATO  Katsumi TANAKA  Masatoshi YOSHIKAWA

    PAPER-Data Engineering, Web Information Systems
    Publicized: 2018/06/18  Vol: E101-D No:9  Page(s): 2279-2290

    This paper proposes methods of finding a ranked list of entities for a given query (e.g. “Kennin-ji”, “Tenryu-ji”, or “Kinkaku-ji” for the query “ancient zen buddhist temples in kyoto”) by leveraging different types of modifiers in the query through identifying corresponding properties (e.g. established date and location for the modifiers “ancient” and “kyoto”, respectively). While most major search engines provide entity search functionality that returns a list of entities based on users' queries, entities are neither presented for a wide variety of search queries nor in the order that users expect. To enhance the effectiveness of entity search, we propose two entity ranking methods. Our first method is a Web-based entity ranking that directly finds relevant entities from Web search results returned in response to the query as a whole and propagates the estimated relevance to the other entities. The second is a property-based entity ranking that ranks entities based on properties corresponding to modifiers in the query. To this end, we propose a novel property identification method that identifies a set of relevant properties based on a Support Vector Machine (SVM) using our seven criteria, which are effective for different types of modifiers. The experimental results showed that our property identification method predicts more relevant properties than using each of the criteria separately. Moreover, we achieved the best performance for returning a ranked list of relevant entities when combining the Web-based and property-based entity ranking methods.
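
    A minimal sketch of the property-identification step: score each (modifier, property) pair with a feature vector and classify relevance with an SVM. The two features and the training pairs below are invented placeholders; the paper uses seven criteria tailored to different modifier types.

```python
import numpy as np
from sklearn.svm import SVC

# Each row: features of one (modifier, property) pair; both features
# are hypothetical stand-ins for the paper's seven criteria.
X = np.array([[0.9, 0.8], [0.1, 0.2], [0.7, 0.6], [0.2, 0.1]])
y = np.array([1, 0, 1, 0])  # 1 = property is relevant to the modifier

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(np.array([[0.8, 0.7]])))  # -> [1]
```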

  • Attribute-Based Keyword Search with Proxy Re-Encryption in the Cloud

    Yanli CHEN  Yuanyuan HU  Minhui ZHU  Geng YANG

    PAPER-Fundamental Theories for Communications
    Publicized: 2018/02/16  Vol: E101-B No:8  Page(s): 1798-1808

    This work addresses the current problem in attribute-based keyword search (ABKS) schemes of how to securely and efficiently delegate search rights to other users when the authorized user is not online. We first combine proxy re-encryption (PRE) with the ABKS technology and propose a scheme called attribute-based keyword search with proxy re-encryption (PABKS). The scheme not only realizes data search and fine-grained access control, but also supports search function sharing. In addition, we randomly blind the user's private key before it is given to the server, which ensures the confidentiality and security of the private key. We also prove that the scheme is secure against selective access structure and chosen keyword attacks (IND-sAS-CKA) in the random oracle model. The performance analysis and security proof show that the proposed scheme achieves efficient and secure data search in the cloud.

  • Multi-Feature Sensor Similarity Search for the Internet of Things

    Suyan LIU  Yuanan LIU  Fan WU  Puning ZHANG

    PAPER-Network
    Publicized: 2017/12/08  Vol: E101-B No:6  Page(s): 1388-1397

    The tens of billions of devices expected to connect to the Internet will include vast numbers of sensors, so demand for sensor-based services is rising. The task of effectively utilizing the enormous numbers of deployed sensors is daunting, and the need for automatic sensor identification has spurred research on sensor similarity search. The Internet of Things (IoT) features massive, non-textual, dynamic data, which raises the critical challenge of efficiently and effectively searching for and selecting the sensors most relevant to a need. Unfortunately, single-attribute similarity searches are highly inaccurate when searching among similar attribute values. In this paper, we propose a group-fitting correlation calculation algorithm (GFC) that can identify the most similar clusters of sensors. The GFC method considers multiple attributes (e.g., humidity, temperature) to calculate sensor similarity; thus, it performs more accurate searches than existing solutions.
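
    The multi-attribute idea can be pictured as below: similarity between two sensors is computed per attribute and then combined. Mean Pearson correlation is a generic stand-in chosen for illustration; the paper's GFC additionally fits cluster-level ("group-fitting") correlations to find the most similar sensor clusters.

```python
import numpy as np

def multi_feature_similarity(sensor_a, sensor_b):
    """Mean Pearson correlation across the attributes two sensors share
    (e.g. humidity and temperature time series)."""
    scores = [np.corrcoef(sensor_a[attr], sensor_b[attr])[0, 1]
              for attr in sensor_a if attr in sensor_b]
    return float(np.mean(scores))

a = {"humidity": [40, 42, 45, 44], "temperature": [20, 21, 23, 22]}
b = {"humidity": [41, 43, 44, 46], "temperature": [19, 22, 22, 23]}
print(multi_feature_similarity(a, b))
```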

  • Advanced DBS (Direct-Binary Search) Method for Compensating Spatial Chromatic Errors on RGB Digital Holograms in a Wide-Depth Range with Binary Holograms

    Thibault LEPORTIER  Min-Chul PARK

    LETTER-Digital Signal Processing
    Vol: E101-A No:5  Page(s): 848-849

    The direct-binary search (DBS) method has been used for converting complex holograms into binary format. However, this algorithm is optimized to reconstruct monochromatic digital holograms and is accurate only in a narrow depth range. In this paper, we propose an advanced direct-binary search method that increases the depth of field of 3D scenes reconstructed in RGB by binary holograms.
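
    For readers unfamiliar with DBS itself, the toy sketch below shows the core loop: flip one hologram pixel at a time and keep the flip only if the reconstruction error decreases. A 2-D FFT is used as a crude stand-in for the propagation model; the paper's wide-depth-range RGB extension is not reproduced.

```python
import numpy as np

def direct_binary_search(target_field, iters=3):
    """Toy DBS binarization against a complex target field."""
    h, w = target_field.shape
    holo = np.random.randint(0, 2, (h, w))

    def error(b):
        recon = np.fft.fft2(b)  # stand-in for optical propagation
        return np.sum((np.abs(recon) - np.abs(target_field)) ** 2)

    best = error(holo)
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                holo[i, j] ^= 1           # trial flip
                e = error(holo)
                if e < best:
                    best = e              # keep the improvement
                else:
                    holo[i, j] ^= 1       # revert
    return holo

target = np.fft.fft2(np.random.rand(16, 16))  # toy complex target
binary_hologram = direct_binary_search(target)
print(binary_hologram.sum())
```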

  • Graph-Based Video Search Reranking with Local and Global Consistency Analysis

    Soh YOSHIDA  Takahiro OGAWA  Miki HASEYAMA  Mitsuji MUNEYASU

    PAPER-Image Processing and Video Processing
    Publicized: 2018/01/30  Vol: E101-D No:5  Page(s): 1430-1440

    Video reranking is an effective way to improve the retrieval performance of text-based video search engines. This paper proposes a graph-based Web video search reranking method with local and global consistency analysis. Generally, the graph-based reranking approach constructs a graph whose nodes and edges respectively correspond to videos and their pairwise similarities. Many reranking methods are built on a scheme that regularizes the smoothness of pairwise relevance scores between adjacent nodes with regard to a user's query. However, since the overall consistency is measured by aggregating only the local consistency over each pair, errors in score estimation increase when noisy samples are included among the neighbors of query-relevant videos. To deal with such noisy samples, the proposed method leverages the global consistency of the graph structure, which differs from conventional methods. Specifically, to detect this consistency, the proposed method introduces a spectral clustering algorithm that can detect video groups, in which videos have strong semantic correlation, on the graph. Furthermore, a new regularization term, which smooths ranking scores within the same group, is introduced into the reranking framework. Since score regularization is performed from both local and global aspects simultaneously, accurate score estimation becomes feasible. Experimental results obtained by applying the proposed method to a real-world video collection show its effectiveness.
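
    The local-consistency part of such reranking schemes is often realized by manifold-style score propagation, sketched below. The group-level (global) regularizer built from spectral clustering that the paper adds is omitted, and the toy graph and seed scores are assumptions.

```python
import numpy as np

def rerank(W, y, alpha=0.8, iters=50):
    """Propagate relevance scores y over a video similarity graph W,
    smoothing scores across edges while staying close to the initial
    text-based ranking (the classic local-consistency update)."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))          # symmetric normalization
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y  # smooth + anchor to y
    return f

# Toy graph of 4 videos; video 0 is the query-relevant seed.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
print(rerank(W, np.array([1.0, 0, 0, 0])))
```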

  • Efficient Methods for Aggregate Reverse Rank Queries

    Yuyang DONG  Hanxiong CHEN  Kazutaka FURUSE  Hiroyuki KITAGAWA

    PAPER
    Publicized: 2018/01/18  Vol: E101-D No:4  Page(s): 1012-1020

    Given two data sets, of user preferences and product attributes, in addition to a set of query products, the aggregate reverse rank (ARR) query returns the top-k users who give the query products a higher aggregate rank than other users do. ARR queries are designed with product bundling in marketing in mind: manufacturers often bundle several products together to maximize benefits or liquidate inventory. This naturally leads to an increase in data on users and products, so efficiently processing ARR queries becomes a big issue. In this paper, we reveal two limitations of the state-of-the-art solution to ARR queries: (a) it is inefficient when the distribution of the query set is dispersive, and (b) it has to process a large portion of the user data. To address these limitations, we develop a cluster-and-process method and a sophisticated indexing strategy. From theoretical analysis and experimental comparisons, we conclude that our proposals have superior performance.
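
    To fix the semantics, here is a brute-force ARR baseline under a linear preference model (scores as dot products, an assumption): for each user, rank all products by that user's score, sum the ranks of the query products, and return the k users with the smallest aggregate rank. The paper's contribution is precisely avoiding this full scan via clustering and indexing.

```python
import numpy as np

def aggregate_reverse_rank(users, products, query_ids, k):
    """Brute-force ARR query; rank 0 means the user's top product."""
    agg = []
    for w in users:
        scores = products @ w                     # user-specific scores
        rank = scores.argsort()[::-1].argsort()   # rank of each product
        agg.append(rank[query_ids].sum())         # aggregate query rank
    return np.argsort(agg)[:k]                    # indices of top-k users

users = np.random.rand(100, 3)      # 100 users, 3 attributes
products = np.random.rand(500, 3)   # 500 products
print(aggregate_reverse_rank(users, products, query_ids=[0, 1], k=5))
```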

  • Drift-Free Tracking Surveillance Based on Online Latent Structured SVM and Kalman Filter Modules

    Yung-Yao CHEN  Yi-Cheng ZHANG

    PAPER-Image Recognition, Computer Vision
    Publicized: 2017/11/14  Vol: E101-D No:2  Page(s): 491-503

    Tracking-by-detection methods treat the tracking task as a continuous detection problem applied over video frames. Modern tracking-by-detection trackers have online learning ability; the update stage is essential because it determines how to modify the classifier inherent in a tracker. However, most trackers search for the target within a fixed region centered at the previous object position; thus, they lack spatiotemporal consistency. This becomes a problem when the tracker detects an incorrect object during short-term occlusion. In addition, the scale of the bounding box that contains the target object is usually assumed not to change. This assumption is unrealistic for long-term tracking, where the scale of the target varies as the distance between the target and the camera changes. The accumulation of errors resulting from these shortcomings causes the drift problem, i.e., drifting away from the target object. To resolve this problem, we present a drift-free, online learning-based tracking-by-detection method using a single static camera. We improve the latent structured support vector machine (SVM) tracker by designing a more robust tracker update step that incorporates two Kalman filter modules: the first predicts an adaptive search region in consideration of the object motion; the second adjusts the scale of the bounding box by accounting for the background model. We also propose a hierarchical search strategy that combines Bhattacharyya coefficient similarity analysis and Kalman predictors; this strategy facilitates overcoming occlusion and increases tracking efficiency. We thoroughly evaluate this work using publicly available videos. Experimental results show that the proposed method outperforms state-of-the-art trackers.
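
    A minimal sketch of the building block used twice above: a 1-D constant-velocity Kalman filter whose prediction can serve as the center of an adaptive search region (the paper uses a second filter for the bounding-box scale). The noise levels are illustrative assumptions.

```python
import numpy as np

class ConstantVelocityKF:
    """1-D constant-velocity Kalman filter with position observations."""
    def __init__(self, q=1e-2, r=1.0):
        self.x = np.zeros(2)                          # [position, velocity]
        self.P = np.eye(2)                            # state covariance
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])               # observe position only
        self.Q, self.R = q * np.eye(2), np.array([[r]])

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]      # predicted position -> search region center

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = ConstantVelocityKF()
for z in [1.0, 2.1, 2.9, 4.2]:    # noisy detections of the object center
    center = kf.predict()
    kf.update(z)
    print(f"predicted center {center:.2f}, detection {z}")
```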

Results 41-60 of 415