Hisashi KURASAWA Daiji FUKAGAWA Atsuhiro TAKASU Jun ADACHI
When developing an index for similarity search in metric spaces, how to divide the space for effective search pruning is a fundamental issue. We present Maximal Metric Margin Partitioning (MMMP), a partitioning scheme for similarity search indexes. MMMP divides the data based on its distribution pattern, paying particular attention to the boundaries of clusters. A partitioning boundary created by MMMP is likely to be located in a sparse area between clusters. Moreover, the partitioning boundary lies at the maximum distance from the edges of the two clusters. We also present an indexing scheme, named the MMMP-Index, which uses MMMP and pivot filtering. The MMMP-Index can prune many objects that are not relevant to a query, and it reduces the query execution cost. Our experimental results show that MMMP effectively indexes clustered data and reduces the search cost. For clustered data in a vector space, the MMMP-Index reduces the computational cost to less than two thirds that of comparable schemes.
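The core partitioning idea can be sketched in a few lines: over candidate pivots, look for the widest gap in the distance distribution and place a ball-partitioning boundary in the middle of it, so that the margin to the nearest objects on both sides is maximal. This is an illustrative toy under assumed Euclidean data, not the authors' exact algorithm.

```python
import random

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def max_margin_partition(points, trials=50):
    """Illustrative ball partitioning: pick a pivot and radius so that the
    spherical boundary falls in the widest gap of the distance distribution,
    i.e. the margin between the two sides is maximal."""
    best = None
    for _ in range(trials):
        pivot = random.choice(points)
        dists = sorted(euclid(pivot, p) for p in points if p is not pivot)
        # widest gap between consecutive distances = candidate margin
        for d0, d1 in zip(dists, dists[1:]):
            margin = d1 - d0
            if best is None or margin > best[0]:
                best = (margin, pivot, (d0 + d1) / 2)  # boundary mid-gap
    margin, pivot, radius = best
    inner = [p for p in points if euclid(pivot, p) <= radius]
    outer = [p for p in points if euclid(pivot, p) > radius]
    return pivot, radius, margin, inner, outer

# Two toy clusters; the boundary should land in the sparse area between them.
pts = [(random.gauss(0, .3), random.gauss(0, .3)) for _ in range(50)] + \
      [(random.gauss(5, .3), random.gauss(5, .3)) for _ in range(50)]
pivot, radius, margin, inner, outer = max_margin_partition(pts)
print(f"radius={radius:.2f}  margin={margin:.2f}  sizes={len(inner)}/{len(outer)}")
```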
Sopon PHUMEECHANYA Charnchai PLUEMPITIWIRIYAWEJ Saowapak THONGVIGITMANEE
In this paper, we propose a novel active contour method for image segmentation that uses local regional information on extendable search lines. We call it the LRES active contour. Our active contour uses the intensity values along a set of search lines that are perpendicular to the contour front. These search lines indicate the direction in which the contour front should move to find the object's boundary. Unlike other methods, none of these search lines has a predetermined length; instead, their lengths increase gradually until a boundary of the object is found. We compare the performance of our LRES active contour to that of other existing active contours, both edge-based and region-based. The results show that our method provides more desirable segmentation outcomes, particularly on some images where other methods fail. Not only is our method robust to noise and able to reach into deep concave shapes, it also has a large capture range and performs well in segmenting heterogeneous textured objects.
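The direction-finding step can be pictured with a one-dimensional sketch: a search line samples intensities outward from the contour front and keeps extending until the local intensity statistics change enough to suggest a boundary. The threshold test below is an assumption for illustration, not the paper's exact criterion.

```python
def extend_search_line(intensity, start, step, threshold=30.0, max_len=100):
    """Grow a search line from `start` in direction `step` (+1 or -1) until
    the intensity deviates from the running local mean by `threshold`,
    which we take as evidence of an object boundary."""
    samples = [intensity[start]]
    pos = start
    for _ in range(max_len):
        pos += step
        if not 0 <= pos < len(intensity):
            return None  # ran off the image without finding a boundary
        local_mean = sum(samples) / len(samples)
        if abs(intensity[pos] - local_mean) > threshold:
            return pos   # boundary found; the contour front should move here
        samples.append(intensity[pos])
    return None

# Toy 1-D profile: flat background, object starts at index 60.
profile = [10] * 60 + [200] * 40
print(extend_search_line(profile, start=5, step=+1))  # -> 60
```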
Keita IMADA Katsuhiko NAKAMURA
This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called "bridging," based on bottom-up parsing of positive samples and a search over rule sets. The sizes of the rule sets and the computation time depend on the search strategy. In addition to global search, which synthesizes minimal rule sets, and serial search, an alternative method that synthesizes semi-optimal rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation times.
An improved bidirectional search algorithm for computing the weight spectrum of convolutional codes is presented. This algorithm does not employ the column distance function of a code, which plays an important role in the original bidirectional search algorithm. We show that the proposed algorithm significantly reduces the computation time for obtaining the weight spectrum of convolutional codes compared with the original bidirectional search algorithm.
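For context on the quantity being searched for, the leading term of the weight spectrum (the free distance) can be computed by a simple uniform-cost search over the code trellis. The sketch below uses the standard rate-1/2 code with generators (7,5) in octal; it illustrates the search space, not the improved bidirectional algorithm itself.

```python
import heapq

G = (0b111, 0b101)  # generator polynomials (7, 5) in octal; memory 2

def step(state, bit):
    """One trellis transition: shift `bit` in, return (next_state, output weight)."""
    reg = (bit << 2) | state
    weight = sum(bin(g & reg).count("1") % 2 for g in G)
    return reg >> 1, weight

def free_distance():
    """Uniform-cost search for the minimum-weight path that diverges from the
    all-zero state and remerges with it; its weight is d_free."""
    s0, w0 = step(0, 1)                  # diverge with input bit 1
    heap, best = [(w0, s0)], {s0: w0}
    while heap:
        w, s = heapq.heappop(heap)
        if s == -1:                      # -1 marks "remerged with state 0"
            return w
        if w > best.get(s, float("inf")):
            continue
        for bit in (0, 1):
            ns, dw = step(s, bit)
            key = -1 if ns == 0 else ns
            if w + dw < best.get(key, float("inf")):
                best[key] = w + dw
                heapq.heappush(heap, (w + dw, key))

print(free_distance())  # -> 5, the free distance of the (7,5) code
```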
Md. Rakib HASSAN Md. Monirul ISLAM Kazuyuki MURASE
Ant Colony Optimization (ACO) algorithms are a new branch of swarm intelligence. They have been applied successfully to a variety of combinatorial optimization problems, and their performance is very promising on small problem instances. However, their time complexity increases and their solution quality decreases on large problem instances. It is therefore crucial to reduce the time requirement of ACO algorithms on large combinatorial optimization problems while at the same time increasing the solution quality. This paper introduces a Local Search based ACO algorithm (LSACO), a new algorithm for solving large combinatorial optimization problems. The basis of LSACO is to apply an adaptive local search method to improve the solution quality. This local search automatically determines the number of edges to exchange during the execution of the algorithm. LSACO also applies a pheromone updating rule and constructs solutions in a new way so as to decrease the convergence time. The performance of LSACO has been evaluated on a number of benchmark combinatorial optimization problems, and the results are compared with several existing ACO algorithms. Experimental results show that LSACO is able to produce good quality solutions with a higher rate of convergence for most of the problems.
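LSACO's local search adaptively chooses how many edges to exchange; as a simplified stand-in, the sketch below applies plain 2-opt improvement (the simplest member of the edge-exchange family) to a tour such as one an ant has constructed. Tour construction and pheromone updating are omitted.

```python
import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse the segment between two edges while it shortens
    the tour: a fixed 2-edge exchange, where LSACO would adapt the count."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# 4 cities on a unit square; the initial tour crosses itself ("bowtie").
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour = two_opt([0, 2, 1, 3], dist)
print(tour, round(tour_length(tour, dist), 3))  # crossing removed -> 4.0
```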
Shigeto TAJIMA Nobuo FUNABIKI Teruo HIGASHINO
Wireless mesh networks have been extensively studied as expandable, flexible, and inexpensive access networks to the Internet. This paper focuses on a network composed of multiple access points (APs) connected through multihop wireless communications, mainly via the wireless distribution system (WDS). For scalability, a proper partition of the APs into multiple WDS clusters is essential, because the number of APs in one cluster is limited by increasing radio interference and control-packet overhead. In this paper, we formulate this WDS clustering problem and prove the NP-completeness of its decision version through reduction from a known NP-complete problem. We then propose a heuristic algorithm for it, which uses a greedy method and a variable depth search method to satisfy the complex constraints while optimizing the cost function. We verify the effectiveness of our algorithm through extensive simulations, where the results confirm its superiority to an existing algorithm in terms of throughput.
Ithipan METHASATE Thanaruk THEERAMUNKONG
Finding a kernel mapping function for support vector machines (SVMs) is a key step towards constructing a high-performance SVM-based classifier. While some recent methods exploit an evolutionary approach to construct a suitable multifunction kernel, most of them search randomly and diversely. In this paper, the concept of a family of identically structured kernel trees is proposed to enable exploration of the structure space using genetic programming while the parameter space of a given tree is investigated using an evolution strategy. To balance structure search against parameter search on the way to an optimal kernel, simulated annealing is introduced. Experiments on a number of benchmark datasets from the UCI and text classification collections show that the proposed method is able to find a better optimal solution than other search methods, including grid search and gradient search.
Hyuntae PARK Hyunjin KIM Hong-Sik KIM Sungho KANG
This letter proposes a fast IP address lookup algorithm based on search space reduction. Prefixes are classified into three types according to their nesting relationships, and a large forwarding table is partitioned into multiple small trees. As a result, the search space is reduced. The results of analyses and experiments show that the proposed method offers higher lookup and update speeds along with reduced memory requirements.
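As background for what the lookup step does, here is a minimal binary-trie longest-prefix match; the letter's three-way prefix classification and table partitioning sit on top of a structure like this. The raw-bit-string interface is a simplification for illustration.

```python
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def insert(root, prefix_bits, next_hop):
    node = root
    for b in prefix_bits:
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.next_hop = next_hop

def lookup(root, addr_bits):
    """Longest-prefix match: walk the trie, remembering the last next-hop seen."""
    node, best = root, None
    for b in addr_bits:
        node = node.children[b]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

def bits(s):  # prefixes kept simple: raw bit strings
    return [int(c) for c in s]

root = TrieNode()
insert(root, bits("10"), "hop-A")      # prefix 10*
insert(root, bits("1011"), "hop-B")    # more specific prefix nested inside 10*
print(lookup(root, bits("101100")))    # -> hop-B (longest match wins)
print(lookup(root, bits("100000")))    # -> hop-A
```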
Shinpei HAYASHI Yasuyuki TSUDA Motoshi SAEKI
This paper proposes a technique for detecting occurrences of refactoring from source code revisions. In a real software development process, a refactoring operation may sometimes be performed together with other modifications in the same revision. This means that detecting refactorings from the differences between two versions stored in a software version archive is not usually an easy process. In order to detect these impure refactorings, we model the detection as a graph search. Our technique considers a version of a program as a state and a refactoring as a transition between two states. It then searches for a path from the initial state to the final state. To improve the efficiency of the search, we use the source code differences between the current and the final state to choose the candidate refactorings to be applied next and to estimate the heuristic distance to the final state. Through case studies, we show that our approach is feasible for detecting combinations of refactorings.
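The search formulation can be illustrated abstractly: states are program versions, transitions are refactoring applications, and the heuristic is the remaining difference to the final version. The toy below runs A* over tuples of tokens with "rename" operations; the real technique operates on actual source code and real refactorings.

```python
import heapq

def heuristic(state, goal):
    """Estimated remaining refactorings = number of differing tokens."""
    return sum(a != b for a, b in zip(state, goal))

def successors(state, goal):
    """Toy 'refactorings': rename one token to its goal counterpart."""
    for i, (a, b) in enumerate(zip(state, goal)):
        if a != b:
            yield state[:i] + (b,) + state[i + 1:], f"rename {a} -> {b}"

def astar(start, goal):
    heap = [(heuristic(start, goal), 0, start, [])]
    seen = set()
    while heap:
        f, g, state, path = heapq.heappop(heap)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt, op in successors(state, goal):
            heapq.heappush(heap, (g + 1 + heuristic(nxt, goal), g + 1, nxt, path + [op]))

before = ("getData", "tmp", "calc")
after  = ("fetchData", "result", "calc")
print(astar(before, after))  # two renames, found without blind exploration
```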
Koichi SHIMIZU Daisuke SUZUKI Toyohiro TSURUMARU
We propose an FPGA-based high-speed search system for cryptosystems that employ a passphrase-based security scheme. We first choose PGP as an example of such a cryptosystem, clear several hurdles to high throughput, and develop a high-speed search system for it. As a result, we achieve a throughput of 1.1×10^5 passphrases per second, which is 38 times the speed of the fastest software. Furthermore, because we assign the passphrase generation operation to software, we can perform many flexible kinds of passphrase generation in addition to simple brute force. In fact, we implement both brute-force and dictionary-based generators, and obtain the same maximum throughput as above in both cases. We next consider the speed of passphrase generation in order to apply our system to cryptosystems other than PGP, and implement a hardware passphrase generator to achieve higher throughput. In the PGP case, the very heavy iteration of hashing (1025 times in our case) lowers the total throughput linearly and makes the figure of 1.1×10^5 passphrases per second sufficient. In other cases, without such an iteration structure, we have to generate far more passphrases, for example 10^8 per second. That can easily exceed the generation speed that software can offer, and we thus conclude that it becomes necessary to place the passphrase generation in hardware instead of in software.
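The effect of the iterated hashing on search throughput can be reproduced in a few lines: each candidate passphrase costs about `iterations` hash invocations, so candidates per second scale inversely with the iteration count. The sketch uses SHA-1 and the 1025-fold iteration count from the abstract, but the construction itself is only in the spirit of PGP's string-to-key step, whose actual details differ.

```python
import hashlib, time

def s2k_like(passphrase: bytes, salt: bytes, iterations: int = 1025) -> bytes:
    """Iterated, salted hashing in the spirit of PGP's string-to-key step."""
    data = salt + passphrase
    digest = hashlib.sha1(data).digest()
    for _ in range(iterations - 1):
        digest = hashlib.sha1(digest + data).digest()
    return digest

# Rough software throughput: hashing dominates, which is the bottleneck
# the FPGA system attacks.
start = time.perf_counter()
n = 200
for i in range(n):
    s2k_like(f"candidate{i}".encode(), b"salt")
rate = n / (time.perf_counter() - start)
print(f"{rate:.0f} passphrases/s in pure Python")
```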
Yanni ZHAO Jinian BIAN Shujun DENG Zhiqiu KONG Kang ZHAO
Despite the growing research effort in formal verification, industrial verification often relies on the constrained random simulation methodology, which is supported by constraint solvers acting as stimulus generators integrated within the simulator, especially for today's large designs with complex constraints. These stimulus generators need to be fast and well-distributed to maintain simulation performance. In this paper, we propose a dynamic method to guide stimulus generation by SAT solvers. An adjusting strategy named Tabu Search with Memory (TSwM) is integrated into the stimulus generator for the search and pruning processes along with the constraint solver. Experimental results show that the proposed method generates well-distributed stimuli with good performance.
OFDM (Orthogonal Frequency Division Multiplexing) is widely used in wideband wireless communication systems due to its excellent performance. One of the most important operations in OFDM receivers is preamble detection. This paper addresses a general form of extended differential detection methods, which is a combination of differential detection and a moving average filter. This paper also presents a filter size determination method that achieves satisfactory performance in various channel environments.
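A common concrete instance of differential detection is delay-and-correlate on a preamble with a repeated half, followed by a moving average over the correlator output; the sketch below shows the filtering step whose size the paper's method would determine. The preamble structure, lengths, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Preamble with two identical halves of length L, embedded in noise.
L = 32
half = rng.standard_normal(L) + 1j * rng.standard_normal(L)
preamble = np.concatenate([half, half])
signal = np.concatenate([rng.standard_normal(100) * 0.5, preamble,
                         rng.standard_normal(100) * 0.5])
signal = signal + 0.3 * (rng.standard_normal(signal.size)
                         + 1j * rng.standard_normal(signal.size))

# Differential (delay-and-correlate) detector: r[n] * conj(r[n+L]).
diff = signal[:-L] * np.conj(signal[L:])

# Moving average filter of size W smooths the metric; W is the design knob.
W = L
metric = np.abs(np.convolve(diff, np.ones(W) / W, mode="valid"))
print("detected preamble start near index", int(np.argmax(metric)))  # ~100
```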
Katsuya MASUDA Jun'ichi TSUJII
This paper presents algorithms for searching for text regions by specifying annotated information in tag-annotated text using Region Algebra. The original algebra and its efficient algorithms are extended to handle both nested regions and crossed regions; these extensions are necessary for text search using rich linguistic annotations. We first assign a depth number to every nested tag region to order these regions, and write efficient algorithms that use the depth numbers in containment operations that can treat nested tag regions. Next, we introduce variables for the attribute values of tags into the algebra to treat annotations in which attributes refer to other tag regions, and propose an efficient method for treating re-entrancy by incrementally determining the values of variables. Our algorithms have been implemented in a text search engine for MEDLINE, a large textbase of abstracts in medical science. Experiments on tag-annotated MEDLINE abstracts demonstrate the effectiveness of specifying annotations and the efficiency of our algorithms. The system is publicly accessible at http://www-tsujii.is.s.u-tokyo.ac.jp/medie/.
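The depth-numbering idea can be shown in miniature: give every region a (start, end, depth) triple, and containment between possibly-nested same-tag regions becomes a simple interval-plus-depth comparison. The event-based data layout below is an illustrative assumption.

```python
def number_regions(tags):
    """tags: sequence of ('open'|'close', name, position) events.
    Assigns each region (name, start, end, depth) so nested same-name
    regions remain distinguishable for containment queries."""
    stack, regions = [], []
    for kind, name, pos in tags:
        if kind == "open":
            stack.append((name, pos, len(stack)))  # depth = nesting level
        else:
            oname, start, depth = stack.pop()
            assert oname == name
            regions.append((name, start, pos, depth))
    return regions

def contains(outer, inner):
    """Region containment: interval inclusion, with depth distinguishing
    identically-named nested regions."""
    _, s1, e1, d1 = outer
    _, s2, e2, d2 = inner
    return s1 <= s2 and e2 <= e1 and d1 < d2

events = [("open", "np", 0), ("open", "np", 2), ("close", "np", 5),
          ("close", "np", 9)]
outer_np, inner_np = sorted(number_regions(events), key=lambda r: r[3])
print(contains(outer_np, inner_np))  # True: the outer <np> contains the inner
```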
Sho SHIMIZU Hiroyuki ISHIKAWA Yutaka ARAKAWA Naoaki YAMANAKA Kosuke SHIBA
How to minimize the number of mirroring resources under a QoS constraint (the resource minimization problem) is an important issue in content delivery networks. This paper proposes a novel approach that takes advantage of the parallelism of dynamically reconfigurable processors (DRPs) to solve the resource minimization problem, which is NP-hard. Our proposal obtains the optimal solution by running an exhaustive search algorithm suited to DRPs. Greedy algorithms, which have been widely studied for tackling the resource minimization problem, cannot always obtain the optimal solution. The proposed method is implemented on an actual DRP and, in experiments, reduces the execution time by a factor of 40 compared with the conventional exhaustive search algorithm on a Pentium 4 (2.8 GHz).
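The problem and the exhaustive strategy can be stated compactly in software; the DRP evaluates candidate placements in parallel, whereas the sketch below enumerates them sequentially. The topology, hop-count distances, and QoS bound are illustrative assumptions.

```python
from itertools import combinations

def min_mirrors(nodes, dist, qos_bound):
    """Smallest set of mirror sites such that every node is within
    `qos_bound` hops of some mirror -- exhaustive, hence exact."""
    for k in range(1, len(nodes) + 1):
        for mirrors in combinations(nodes, k):
            if all(min(dist[n][m] for m in mirrors) <= qos_bound
                   for n in nodes):
                return mirrors  # first feasible k is minimal
    return tuple(nodes)

# 4-node path topology 0 - 1 - 2 - 3 with hop-count distances.
nodes = [0, 1, 2, 3]
dist = [[abs(i - j) for j in nodes] for i in nodes]
print(min_mirrors(nodes, dist, qos_bound=1))  # -> (0, 2): all within 1 hop
```

A greedy heuristic might stop at a suboptimal placement here; the exhaustive loop cannot, which is the property the DRP's parallelism makes affordable.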
Kuo-Ming HUNG Yen-Liang CHEN Ching-Tang HSIEH
This paper proposes a novel image inpainting method based on the bandelet transform. The technique performs image restoration on multi-resolution layers and mainly utilizes the geometrical flow of the texture neighboring the damaged regions as the basis of restoration. By performing a warp transform along the geometrical flow, it maps the textural variation onto the neighboring domain axes and uses bandelet decomposition to separate unrelated textures into different bands, which are then combined with an affine search method to perform the restoration. The experimental results show that the proposed method simplifies the repair decision process and improves perceptual (HVS) quality: the repaired images preserve strongly varying contours as well as high-frequency texture, leading to state-of-the-art results.
Hung-Min SUN Mu-En WU Cheng-Ta YANG
This investigation proposes two methods for embedding backdoors in the RSA modulus N=pq rather than in the public exponent e. This strategy not only permits manufacturers to embed backdoors in an RSA system, but also allows users to choose any desired public exponent, such as e=2^16+1, to ensure efficient encryption. This work utilizes a lattice attack and an exhaustive attack to embed the backdoors in the two proposed methods, called RSASBLT and RSASBES, respectively. Both approaches involve straightforward steps, making their running time roughly the same as normal RSA key-generation time, which implies that no one can detect the backdoor by observing timing disparities.
Satoshi NAGATA Motohiro TANNO Yoshihisa KISHIYAMA Kenichi HIGUCHI Mamoru SAWAHASHI
This paper presents a comparison of hierarchical and non-hierarchical synchronization channel (SCH) structures in terms of the initial cell search time and the neighboring cell search time, in order to establish the optimum SCH structure for the Evolved UTRA downlink. Computer simulation results show that in a 19-cell configuration, the cell search time at the 90% point of the cumulative distribution function (CDF) using the hierarchical SCH structure is less than half that of the non-hierarchical SCH structure in a neighboring cell search under low signal-to-interference plus noise power ratio (SINR) conditions, although both structures achieve almost the same cell search time in the initial cell search. This is because the cross-correlation based SCH symbol timing detection in the hierarchical SCH structure is affected less by noise than the auto-correlation based detection in the non-hierarchical SCH structure. Thus, we conclude that the hierarchical SCH structure is superior to the non-hierarchical SCH structure in terms of cell search time, especially for the neighboring cell search.
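The noise-robustness argument can be demonstrated numerically: cross-correlation against a known replica averages noise coherently, whereas auto-correlation multiplies two noisy segments together. The sequence length, noise level, and signal model below are illustrative, not E-UTRA parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
L, na = 64, 1.0                        # sequence length; strong noise (low SINR)
sch = np.sign(rng.standard_normal(L))  # known sync sequence, transmitted twice

noise = lambda n: rng.standard_normal(n) * na
rx = np.concatenate([noise(200), sch + noise(L), sch + noise(L), noise(200)])

# Auto-correlation detection (non-hierarchical style): both factors are noisy.
acorr = np.array([abs(rx[n:n+L] @ rx[n+L:n+2*L]) for n in range(rx.size - 2*L)])
# Cross-correlation detection (hierarchical style): the clean replica averages
# the noise instead of multiplying it.
xcorr = np.array([abs(rx[n:n+L] @ sch) for n in range(rx.size - L)])

print("auto-corr  peak:", int(np.argmax(acorr)))  # less reliable at low SINR
print("cross-corr peak:", int(np.argmax(xcorr)))  # robustly near index 200
```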
Ik Rae JEONG Jeong Ok KWON Dowon HONG Dong Hoon LEE
Searchable encryption has many applications, including e-mail systems and storage systems. The usefulness of searchable encryption derives from its support of keyword-testability: a receiver of a ciphertext can test whether the ciphertext contains a specific keyword. Recently, Bellare et al. suggested an efficiently-searchable encryption scheme with keyword-recoverability as well as keyword-testability, where keyword-recoverability means that a receiver can extract the keyword from a ciphertext. All of the previous searchable encryption schemes had provided only keyword-testability. However, as explained by Bellare et al., no efficiently-searchable encryption scheme can provide security even against chosen keyword attacks; hence Bellare et al.'s scheme assumes that no useful partial information about the keyword is known to the adversaries. In this paper, we suggest an SEKR (searchable encryption with keyword-recoverability) scheme which is secure even if the adversaries have some useful partial information about the keyword. Our scheme provides security against chosen ciphertext attacks, which are stronger than chosen keyword attacks. We also suggest an SEKR scheme for multi-keywords.
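Keyword-testability itself is easy to picture with a deterministic tag: the sender attaches a keyed hash of each keyword, and a tester holding the key checks candidate keywords against the tag. This toy symmetric construction conveys only the interface; it is not the paper's SEKR scheme and carries none of its security guarantees (in particular, it is exactly the kind of scheme that is insecure when keywords are guessable).

```python
import hmac, hashlib, os

def make_tag(key: bytes, keyword: str) -> bytes:
    """Deterministic searchable tag attached alongside the real ciphertext."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def test_keyword(key: bytes, tag: bytes, candidate: str) -> bool:
    """Keyword-testability: check a guess without decrypting the payload."""
    return hmac.compare_digest(tag, make_tag(key, candidate))

key = os.urandom(32)
tag = make_tag(key, "urgent")
print(test_keyword(key, tag, "urgent"))  # True
print(test_keyword(key, tag, "spam"))    # False
```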
Umaporn SUPASITTHIMETHEE Toshiyuki SHIMIZU Masatoshi YOSHIKAWA Kriengkrai PORKAEW
One of the most convenient ways to query XML data is keyword search, because it requires neither knowledge of the XML structure nor learning a new user interface. However, keyword search is ambiguous: users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose an XML semantic search based on keywords, called XSemantic. On the semantics side, we make three contributions. First, through semantic term expansion, our system is made robust against ambiguous keywords by using a domain ontology. Second, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of shortest paths to return meaningful connections between keywords. Third, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationships, so that the search results with higher relevance are presented to the users first. Regarding the problem, found in LCA and proximity search approaches, of how much information to include in the search results, we introduce the notion of the Lowest Common Element Ancestor (LCEA) and define a simple rule that requires no schema information such as a DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact, meaningful results. Additionally, the benefits of our proposed semantics are demonstrated by the second experiment.
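The LCA-style computation underlying return-node selection can be sketched with the standard library: find the nodes matching each keyword and walk their parent chains to the lowest common ancestor. XSemantic's LCEA adds further rules on top of this; the XML snippet is illustrative.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<bib><book><title>XML Search</title><author>Smith</author></book>"
    "<book><title>Databases</title><author>Jones</author></book></bib>")

parent = {child: p for p in doc.iter() for child in p}

def ancestors(node):
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def lca(a, b):
    """Lowest common ancestor of two element nodes."""
    seen = set(map(id, ancestors(a)))
    for n in ancestors(b):
        if id(n) in seen:
            return n

hit1 = next(e for e in doc.iter() if e.text == "XML Search")
hit2 = next(e for e in doc.iter() if e.text == "Smith")
print(lca(hit1, hit2).tag)  # 'book': a compact, meaningful return node
```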
Quantum computations have so far proved to be more powerful than classical computations, but quantum computers have not yet been put into practical use owing to several technical issues. One of the most serious problems in realizing quantum computers is decoherence, which occurs inevitably because any apparatus is surrounded by its environment and is therefore an open system. In this paper, we survey a variety of effects of decoherence on quantum algorithms such as Grover's database search and quantum walks, and we show how quantum algorithms work under decoherence, how sensitive they are to decoherence, and how to implement a robust quantum circuit.
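The sensitivity question can be conveyed with a toy numerical model: in ideal Grover search the success probability after k iterations is sin^2((2k+1)θ) with θ = arcsin(1/√N), and a crude depolarizing-noise model damps the coherent part toward the uniform distribution per iteration. This model and its parameters are illustrative assumptions, not a specific result from the paper.

```python
import math

def grover_success(n_items, k, p_dephase=0.0):
    """Success probability after k Grover iterations with one marked item,
    under a crude per-iteration model: with probability (1 - p_dephase)**k
    the state stays coherent, otherwise it collapses to the uniform mixture."""
    theta = math.asin(1 / math.sqrt(n_items))
    coherent = math.sin((2 * k + 1) * theta) ** 2
    survive = (1 - p_dephase) ** k
    return survive * coherent + (1 - survive) / n_items

n = 1024
k_opt = round(math.pi / (4 * math.asin(1 / math.sqrt(n))))  # ~25 iterations
for p in (0.0, 0.01, 0.1):
    print(f"p_dephase={p:<5} success={grover_success(n, k_opt, p):.3f}")
```

Even a 1% per-iteration decoherence rate visibly erodes the near-certain success of the ideal algorithm, which is the kind of sensitivity the survey examines.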