
Keyword Search Result

[Keyword] algorithm(2137hit)

1321-1340hit(2137hit)

  • Normalizing Syntactic Structures Using Part-of-Speech Tags and Binary Rules

    Seongyong KIM  Kong-Joo LEE  Key-Sun CHOI  

     
    PAPER

      Vol:
    E86-D No:10
      Page(s):
    2049-2056

We propose a normalization scheme for syntactic structures that uses a binary phrase structure grammar with composite labels. The normalization adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, each extracted from one of its sub-trees, so that it represents the compositional information of the tree. The composite label is generated from part-of-speech tags using an automatic labelling algorithm. Since the proposed normalization scheme is binary and uses only part-of-speech information, it can readily be used to compare the results of different syntactic analyses independently of their syntactic descriptions, and it can be applied to other languages as well. It can also be used for syntactic analysis, where it outperforms the previous syntactic description on a Korean corpus. We have implemented a tool that transforms a syntactic description into a normalized one based on the proposed scheme. It can help construct a unified syntactic corpus and extract syntactic information from various types of syntactic corpora in a uniform way.

  • Adaptive On-Line Frequency Stabilization System for Laser Diodes Based on Genetic Algorithm

    Shintaro HISATAKE  Naoto HAMAGUCHI  Takahiro KAWAMOTO  Wakao SASAKI  

     
    PAPER-Lasers, Quantum Electronics

      Vol:
    E86-C No:10
      Page(s):
    2097-2102

We propose a frequency stabilization system for laser diodes (LDs) in which the electrical feedback loop response is determined by an on-line genetic algorithm (GA) so as to attain lower LD frequency noise power within the specific Fourier frequency range of interest. At the initial stage of stabilization, the feedback loop response is controlled through the GA by manipulating the proportional gain, integration time, and derivative time of a conventional analog PID controller. Individuals with 12-bit chromosomes encoding combinations of PID parameters converge evolutionarily toward an optimal solution that provides a suitable feedback loop response. A fitness function is calculated for each individual in real time based on the power spectral density (PSD) of the frequency noise. The performance of this system has been tested by stabilizing a 50 mW visible LD. Long-term (τ > 0.01 s) frequency stability and its repeatability have been improved.
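The GA loop described above can be sketched as follows: a population of 12-bit chromosomes, each decoding to three PID parameters, evolved by tournament selection, single-point crossover, and bit-flip mutation. The 4-bit-per-parameter split and the synthetic fitness function (a stand-in for the measured frequency-noise PSD) are illustrative assumptions, not details from the paper.

```python
import random

random.seed(0)

def decode(chrom):
    # Split the 12-bit chromosome into three 4-bit fields:
    # proportional gain, integration time, derivative time (assumed split).
    kp = (chrom >> 8) & 0xF
    ti = (chrom >> 4) & 0xF
    td = chrom & 0xF
    return kp, ti, td

def fitness(chrom):
    # Toy stand-in for the measured noise PSD: higher is better,
    # maximized at (Kp, Ti, Td) = (9, 3, 6).
    kp, ti, td = decode(chrom)
    return -((kp - 9) ** 2 + (ti - 3) ** 2 + (td - 6) ** 2)

def evolve(pop_size=20, generations=60):
    pop = [random.getrandbits(12) for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection of size 3.
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randint(1, 11)          # single-point crossover
            mask = (1 << cut) - 1
            c1 = (a & ~mask) | (b & mask)
            c2 = (b & ~mask) | (a & mask)
            for c in (c1, c2):
                if random.random() < 0.1:        # bit-flip mutation
                    c ^= 1 << random.randrange(12)
                nxt.append(c)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(decode(best), fitness(best))
```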

  • An Iterative Decoding Algorithm for Channels with Additive Linear Dynamical Noise

    Tadashi WADAYAMA  

     
    PAPER-Coding Theory

      Vol:
    E86-A No:10
      Page(s):
    2452-2460

In this paper, an iterative decoding algorithm for channels with additive linear dynamical noise is presented. The proposed algorithm tightly couples two inference algorithms: the sum-product algorithm, which infers the information symbols of a low-density parity-check (LDPC) code, and the Kalman smoothing algorithm, which infers the channel states. Linear dynamical noise is noise generated by a linear dynamical system; such noise (i.e., additive colored noise) is often encountered in practical communication and storage systems. Conventional iterative decoding algorithms such as the sum-product algorithm cannot realize the full potential of turbo codes or LDPC codes over such a channel because they are designed under an independence assumption on the noise. Several simulations have been performed to assess the performance of the proposed algorithm. From the simulation results, it can be concluded that the Kalman smoothing algorithm deserves to be implemented in a decoder when the linear dynamical part of the noise dominates the white Gaussian part. In such a case, the performance of the proposed algorithm is far superior to that of the conventional algorithm.
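The channel-state inference half of the scheme can be illustrated with a scalar example: a Rauch-Tung-Striebel (Kalman) smoother estimating AR(1) colored noise observed in white noise. This is a minimal sketch of the smoothing component only; the coupling with sum-product decoding is not shown, and the model parameters are assumptions.

```python
import random

random.seed(1)

# State model: colored noise x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
# Observation:  y_t = x_t + v_t,                       v_t ~ N(0, r)
a, q, r = 0.95, 0.1, 0.5
T = 200

x = [0.0]
for _ in range(T - 1):
    x.append(a * x[-1] + random.gauss(0, q ** 0.5))
y = [xi + random.gauss(0, r ** 0.5) for xi in x]

# Forward Kalman filter.
m_f, p_f = [], []                    # filtered means and variances
m_pred, p_pred = 0.0, 1.0
for yt in y:
    k = p_pred / (p_pred + r)        # Kalman gain
    m = m_pred + k * (yt - m_pred)
    p = (1 - k) * p_pred
    m_f.append(m)
    p_f.append(p)
    m_pred, p_pred = a * m, a * a * p + q

# Backward Rauch-Tung-Striebel smoothing pass.
m_s, p_s = m_f[:], p_f[:]
for t in range(T - 2, -1, -1):
    mp, pp = a * m_f[t], a * a * p_f[t] + q   # one-step prediction from t
    g = a * p_f[t] / pp                        # smoother gain
    m_s[t] = m_f[t] + g * (m_s[t + 1] - mp)
    p_s[t] = p_f[t] + g * g * (p_s[t + 1] - pp)

mse_filt = sum((mf - xt) ** 2 for mf, xt in zip(m_f, x)) / T
mse_smooth = sum((ms - xt) ** 2 for ms, xt in zip(m_s, x)) / T
print(mse_filt, mse_smooth)
```

Both estimates beat the raw observations (whose error variance is r); the smoother, using future samples as well, is typically tighter still.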

  • A Technique for Constructing Dependable Internet Server Cluster

    Mamoru OHARA  Masayuki ARAI  Satoshi FUKUMOTO  Kazuhiko IWASAKI  

     
    PAPER-Fault Tolerance

      Vol:
    E86-D No:10
      Page(s):
    2198-2208

An approach is proposed for constructing a dependable server cluster composed only of server nodes, with all nodes running the same algorithm. The cluster propagates an IP multicast address as the server address, and clients multicast requests to the cluster. A local proxy running on each client machine enables conventional client software designed for unicasting to communicate with the cluster without having to be modified. Evaluation of a prototype system providing domain name service showed that a cluster using this technique achieves high dependability with acceptable performance degradation.

  • An Integrated Approach for Implementing Imprecise Computations

    Hidenori KOBAYASHI  Nobuyuki YAMASAKI  

     
    PAPER

      Vol:
    E86-D No:10
      Page(s):
    2040-2048

The imprecise computation model is one of the flexible computation models used to construct real-time systems. It is especially useful when worst-case execution times are difficult to estimate or execution times vary widely. Although there are several ways to implement this model, they have not attracted much attention from real-world application programmers to date because of their unrealistic assumptions and high dependency on the execution environment. In this paper, we present an integrated approach for implementing the imprecise computation model. In particular, our research covers three aspects. First, we present a new imprecise computation model which consists of a mandatory part, an optional part, and another mandatory part called the wind-up part. The wind-up part allows application programmers to explicitly incorporate into their programs the exact operations needed for safe degradation of performance when resources run short. Second, we describe a scheduling algorithm called Mandatory-First with Wind-up Part (M-FWP), which is based on the Earliest Deadline First strategy. This algorithm, unlike scheduling algorithms developed for the classical imprecise computation model, is capable of scheduling a mandatory portion after an optional portion. Third, we present a dynamic priority server method for an efficient implementation of the M-FWP algorithm. We also show that at most one instance of the proposed server is needed per node. To estimate the performance of the proposed approach, we have implemented a real-time operating system called RT-Frontier. The experimental analyses have demonstrated its ability to implement tasks based on the imprecise computation model without requiring any knowledge of the execution time of the optional part. Moreover, it also showed a performance gain over the traditional checkpointing technique.

  • Parallel Algorithms for Finding the Center of Interval and Circular-Arc Graphs

    Fang Rong HSU  Man Kwan SHAN  

     
    LETTER-Graphs and Networks

      Vol:
    E86-A No:10
      Page(s):
    2704-2709

The center problem of a graph is motivated by a number of facility location problems. In this paper, we propose parallel algorithms for finding the center of interval graphs and circular-arc graphs. Our algorithms run in O(log n) time using O(n/log n) processors when the intervals and arcs are given in sorted order. Our algorithms are designed for the EREW PRAM model.

  • Two Algorithms for Random Number Generation Implemented by Using Arithmetic of Limited Precision

    Tomohiko UYEMATSU  Yuan LI  

     
    PAPER-Information Security

      Vol:
    E86-A No:10
      Page(s):
    2542-2551

This paper presents two different algorithms for random number generation. One algorithm generates a random sequence with an arbitrary distribution from a sequence of pure random numbers, i.e. a sequence with uniform distribution. The other algorithm generates a sequence of pure random numbers from a sequence of a given i.i.d. source. Both algorithms can be regarded as implementations of the interval algorithm using integer arithmetic with limited precision. We analyze the approximation error, measured by the variational distance between the probability distributions of the desired random sequence and of the output sequence generated by the algorithms. Further, we give bounds on the expected length of input sequence per output symbol and compare them with that of the original interval algorithm.
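The interval algorithm underlying both schemes can be sketched as follows: the unit interval is partitioned according to the target distribution and repeatedly halved using uniform input bits until it falls entirely inside a single symbol's slot. This sketch uses ordinary floating-point arithmetic rather than the limited-precision integer arithmetic analyzed in the paper.

```python
import random

def interval_sample(probs, rand_bit):
    """Draw one symbol with distribution `probs` from uniform random bits
    by refining the interval [lo, hi) until it lies inside a single
    symbol's cumulative-probability slot."""
    # Cumulative boundaries of the target distribution.
    cum = [0.0]
    for p in probs:
        cum.append(cum[-1] + p)
    lo, hi = 0.0, 1.0
    while True:
        # Does [lo, hi) lie entirely inside one symbol's slot?
        for i in range(len(probs)):
            if cum[i] <= lo and hi <= cum[i + 1]:
                return i
        # No: consume one uniform bit and halve the interval.
        mid = (lo + hi) / 2
        if rand_bit():
            lo = mid
        else:
            hi = mid

random.seed(2)
bit = lambda: random.getrandbits(1)
counts = [0, 0, 0]
for _ in range(10000):
    counts[interval_sample([0.5, 0.25, 0.25], bit)] += 1
print(counts)
```

Over 10000 draws the empirical frequencies approach the target distribution (0.5, 0.25, 0.25).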

  • High-Level Synthesis by Ants on a Tree

    Rachaporn KEINPRASIT  Prabhas CHONGSTITVATANA  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E86-A No:10
      Page(s):
    2659-2669

In this paper an algorithm based on Ant Colony Optimization techniques, called Ants on a Tree (AOT), is introduced. This algorithm can integrate many algorithms to solve a single problem. The strength of AOT is demonstrated by solving a High-Level Synthesis problem, which consists of many design steps and many algorithms for solving each of them. AOT can easily integrate these algorithms to limit the search space and use them as heuristic weights to guide the search. During the search, AOT generates a dynamic decision tree. A boosting technique similar to branch-and-bound algorithms is applied to guide the search in the decision tree. The storage explosion problem is eliminated by the evaporation of the pheromone trails laid by the ants, an inherent property of our search algorithm.

  • A Computationally Efficient Energy-Aware Multicast Tree Recovery Algorithm for Ad Hoc Network

    Jim M. NG  Sadagopan SRIDHARAN  Chor Ping LOW  

     
    PAPER-Network

      Vol:
    E86-B No:9
      Page(s):
    2701-2708

Multicasting is an efficient communication tool for multi-point applications such as conferencing and information distribution. In ad hoc networks, node mobility causes frequent changes in network topology, and re-constructing the multicast tree in an efficient and effective manner becomes a critical issue. In case of link breakage, most currently available multicast tree construction protocols require either a total re-build of the tree or reconnecting the disjoined node to the multicast tree via the shortest path, which may disrupt the optimising factors, such as energy consumption, delay, or cost, used in building the original tree. In this paper, we introduce a computationally efficient recovery algorithm which also minimises power consumption on the tree.

  • A NLMS Algorithm for Frequency Offset Estimation of OFDM Communications

    Ann-Chen CHANG  Zhi-Feng HUANG  

     
    LETTER-Wireless Communication Technology

      Vol:
    E86-B No:9
      Page(s):
    2823-2827

In this letter, we present a normalized least-mean-square (NLMS) blind estimation algorithm for carrier frequency offset estimation in orthogonal frequency division multiplexing systems. In conjunction with a closed-loop estimation structure, the proposed algorithm efficiently eliminates the inter-carrier interference caused by a time-varying carrier frequency offset. The proposed algorithm offers faster convergence and greater accuracy in the carrier frequency offset estimate. Several computer simulation examples are presented to illustrate the effectiveness of the proposed algorithm.
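The core NLMS recursion is the standard normalized step w ← w + μ e x / (‖x‖² + ε). The toy below applies it to identification of a short FIR system rather than to OFDM offset estimation; the filter taps, step size, and signal model are illustrative assumptions.

```python
import random

random.seed(3)

# Unknown 4-tap FIR system to identify.
h = [0.4, -0.2, 0.1, 0.05]
w = [0.0] * 4               # adaptive filter weights
mu, eps = 0.5, 1e-8         # step size and regularizer

x_hist = [0.0] * 4          # most recent input first
for _ in range(2000):
    x = random.gauss(0, 1)
    x_hist = [x] + x_hist[:-1]
    d = sum(hi * xi for hi, xi in zip(h, x_hist))        # desired output
    y = sum(wi * xi for wi, xi in zip(w, x_hist))        # filter output
    e = d - y
    norm = sum(xi * xi for xi in x_hist) + eps
    # NLMS update: step size normalized by the input power.
    w = [wi + mu * e * xi / norm for wi, xi in zip(w, x_hist)]

print(w)
```

After 2000 noiseless samples the weights converge to the unknown taps; the normalization is what keeps the step size well-behaved regardless of input power.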

  • An OSIC Based Reduced-Rank MIMO Equalizer Using Conjugate Gradient Algorithm

    Chung-Lien HO  Gau-Joe LIN  Ta-Sung LEE  

     
    PAPER-Wireless Communication Technology

      Vol:
    E86-B No:9
      Page(s):
    2656-2664

A reduced-complexity multiple-input multiple-output (MIMO) equalizer with ordered successive interference cancellation (OSIC) is proposed for combating intersymbol interference (ISI) and cochannel interference (CCI) over frequency-selective multipath channels. It is developed as a reduced-rank realization of the conventional MMSE decision feedback equalizer (DFE). In particular, the MMSE weight vectors at each stage of OSIC are computed based on the generalized sidelobe canceller (GSC) technique, and reduced-rank processing is incorporated by using the conjugate gradient (CG) algorithm for reduced-complexity implementation. The CG algorithm leads to the best low-rank representation of the GSC blocking matrix via an iterative procedure, which in turn gives a reduced-rank equalizer weight vector achieving the best compromise between ISI and CCI suppression. With the dominant interference successfully cancelled at each stage of OSIC, the number of iterations required for convergence of the CG algorithm for the desired signal decreases accordingly. Computer simulations demonstrate that the proposed reduced-rank MIMO DFE can achieve nearly the same performance as the full-rank MIMO MMSE DFE with an effective rank much lower than the dimension of the signal-plus-interference subspace.
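The CG iteration at the heart of the reduced-rank processing is the standard one for a symmetric positive-definite system. The sketch below solves a small textbook system; it is not tied to the GSC formulation in the paper.

```python
def conjugate_gradient(A, b, iters=10):
    """Solve A x = b for symmetric positive-definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x (x = 0 initially)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-20:         # converged
            break
        # New search direction, conjugate to the previous ones.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)
```

For an n-dimensional system CG converges in at most n iterations in exact arithmetic, which is why truncating it early yields a natural low-rank approximation.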

  • An A* Search in Sentential Matching for Question Answering

    Tatsunori MORI  Tomohiro OHTA  Katsuyuki FUJIHATA  Ryutaro KUMON  

     
    PAPER

      Vol:
    E86-D No:9
      Page(s):
    1658-1668

In this paper, we propose a method that introduces A* search control into the sentential matching mechanism of question-answering systems, in order to reduce response time while preserving the accuracy of the answer. Question answering is a new technology that retrieves not relevant documents but the answer(s) directly, by combining several methodologies including IR and IE. One of the essential processes is the sentential matching between the user's query and each sentence in the documents. In general, obtaining a precise matching score at higher resolution requires processes with higher computational costs. We therefore introduce an A* search in which both the processing cost and the resolution of the matching score are taken into account simultaneously. According to experiments on the NTCIR3 QAC1 Task1, the system with the controlled search is 3.4-8.5 times faster than the system without control.

  • A Study on the Behavior of Genetic Algorithms on NK-Landscapes: Effects of Selection, Drift, Mutation, and Recombination

    Hernan AGUIRRE  Kiyoshi TANAKA  

     
    PAPER-Neuro, Fuzzy, GA

      Vol:
    E86-A No:9
      Page(s):
    2270-2279

NK-Landscapes are stochastically generated fitness functions on bit strings, parameterized by N bits and K epistatic interactions between bits. The term epistasis describes nonlinearities in fitness functions due to changes in the values of interacting bits. Empirical studies have shown that the overall performance of random bit climbers on NK-Landscapes is superior to that of some simple and enhanced genetic algorithms (GAs). Analytical studies have also led to the suggestion that NK-Landscapes may not be appropriate for testing the performance of GAs. In this work we study the effect of selection, drift, mutation, and recombination on NK-Landscapes for N = 96. We take a model of a generational parallel varying mutation GA (GA-SRM) and switch its major components on and off to emphasize each of the four processes mentioned above. We observe that using an appropriate selection pressure and postponing drift make GAs quite robust on NK-Landscapes; in contrast to previous studies, even simple GAs with these two features perform better than a random bit climber (RBC+) for a broad range of classes of problems (K 4). We also observe that the interaction of parallel varying mutation with crossover further improves the reliability of the GA, especially for 12 < K < 32. Contrary to intuition, we find that for small K a mutation-only evolutionary algorithm (EA) is very effective and crossover may be omitted; but the relative importance of crossover interacting with varying mutation increases with K, performing better than mutation alone (K > 12). This work indicates that NK-Landscapes are useful for testing each of the major processes involved in a GA and for assessing the overall behavior of a GA on complex non-linear problems. This study also gives practitioners applying GAs to real-world problems valuable guidance on how to configure the GA to achieve better results as the non-linearity and complexity of the problem increase.
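An NK-Landscape itself is straightforward to construct: each bit contributes a random table value indexed by its own state and the states of its K interacting bits, and the fitness is the average contribution. The sketch below uses the adjacent-neighborhood interaction model, which is an assumption; NK models are also defined with randomly chosen interaction partners.

```python
import random

def make_nk_landscape(n, k, seed=0):
    """Build an NK-Landscape: bit i contributes f_i(bit i, its K
    neighbors), with contributions drawn uniformly from [0, 1)."""
    rng = random.Random(seed)
    # One lookup table per bit, indexed by the (k+1)-bit local pattern.
    tables = [
        {pattern: rng.random() for pattern in range(2 ** (k + 1))}
        for _ in range(n)
    ]
    def fitness(bits):
        total = 0.0
        for i in range(n):
            # Adjacent-neighborhood model: bit i interacts with the
            # next k bits, wrapping around the string.
            pattern = 0
            for j in range(k + 1):
                pattern = (pattern << 1) | bits[(i + j) % n]
            total += tables[i][pattern]
        return total / n
    return fitness

f = make_nk_landscape(n=8, k=2, seed=42)
s = [1, 0, 1, 1, 0, 0, 1, 0]
print(f(s))
```

Because each bit appears in k+1 local patterns, flipping a single bit changes several contributions at once; increasing K makes the landscape more rugged.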

  • Topic Keyword Identification for Text Summarization Using Lexical Clustering

    Youngjoong KO  Kono KIM  Jungyun SEO  

     
    PAPER

      Vol:
    E86-D No:9
      Page(s):
    1695-1701

Automatic text summarization has the goal of reducing the size of a document while preserving its content. Generally, producing a summary as extracts is achieved by including only the most topic-related sentences. DOCUSUM is our summarization system based on a new topic keyword identification method. The process of DOCUSUM is as follows. First, DOCUSUM converts the content words of a document into elements of a context vector space. It then constructs lexical clusters from the context vector space and identifies core clusters. Next, it selects topic keywords from the core clusters. Finally, it generates a summary of the document using the topic keywords. In experiments at various compression ratios (30% compression, 10% compression, and extraction of a fixed number of sentences: 4 or 8), DOCUSUM showed better performance than other methods.

  • Node-to-Set Disjoint Paths Problem in Pancake Graphs

    Keiichi KANEKO  Yasuto SUZUKI  

     
    PAPER-Algorithms and Applications

      Vol:
    E86-D No:9
      Page(s):
    1628-1633

In this paper, we give an algorithm for the node-to-set disjoint paths problem in pancake graphs, together with its evaluation results. The algorithm is of polynomial order in n for an n-pancake graph. It is based on recursion and is divided into two cases according to the distribution of destination nodes among the classes into which all the nodes in a pancake graph are categorized. The sum of the lengths of the paths obtained and the time complexity of the algorithm are estimated, and the average performance is evaluated by computer simulation.
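For readers unfamiliar with pancake graphs: the nodes are the permutations of n symbols, and two nodes are adjacent when one is obtained from the other by reversing a prefix. Below is a minimal sketch of neighbor generation and breadth-first distance, not the authors' disjoint-paths algorithm.

```python
from collections import deque

def neighbors(perm):
    """In the n-pancake graph, a node (a permutation, as a tuple) is
    adjacent to the n-1 permutations obtained by reversing a prefix of
    length 2..n."""
    return [perm[:i][::-1] + perm[i:] for i in range(2, len(perm) + 1)]

def bfs_distance(src, dst):
    """Shortest path length between two nodes of the pancake graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for v in neighbors(u):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None

print(bfs_distance((2, 1, 0, 3), (0, 1, 2, 3)))
```

BFS is exponential in n and only practical for tiny graphs; the point of algorithms such as the authors' is to construct paths in polynomial time without exploring the whole graph.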

  • Parallel Algorithms for Higher-Dimensional Euclidean Distance Transforms with Applications

    Yuh-Rau WANG  Shi-Jinn HORNG  Yu-Hua LEE  Pei-Zong LEE  

     
    INVITED PAPER-Algorithms and Applications

      Vol:
    E86-D No:9
      Page(s):
    1586-1593

Based on the dimensionality reduction technique and the solution to the proximate points problem, we achieve optimality for the three-dimensional Euclidean distance transform (3D_EDT) computation. For an N × N × N binary image, our algorithms for both the 3D_EDT and its applications can be performed in O(log log N) time using CRCW processors or in O(log N) time using EREW processors. To the best of our knowledge, all the results described above are the best known. As for the n-dimensional Euclidean distance transform (nD_EDT) and its applications for a binary image of size N^n, all of them can be computed in O(n log log N) time using CRCW processors or in O(n log N) time using EREW processors.
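The dimensionality reduction idea can be seen in a sequential sketch: the exact squared EDT of a 2D binary image computed by applying a 1D transform along rows and then along columns. This is a brute-force O(N³) illustration of the decomposition only, not the parallel algorithm of the paper.

```python
def edt1d_sq(f):
    """1D squared-distance transform of a cost vector: for each position i,
    min over j of f[j] + (i - j)^2. O(n^2) brute force for clarity."""
    n = len(f)
    return [min(f[j] + (i - j) ** 2 for j in range(n)) for i in range(n)]

def edt2d_sq(binary):
    """Exact squared Euclidean distance transform of a 2D binary image
    (1 = feature pixel), computed dimension by dimension."""
    INF = float("inf")
    # Pass 1: transform each row, starting from 0 at feature pixels.
    rows = [edt1d_sq([0 if v else INF for v in row]) for row in binary]
    # Pass 2: transform each column of the intermediate result.
    cols = list(zip(*rows))
    out_cols = [edt1d_sq(list(c)) for c in cols]
    return [list(r) for r in zip(*out_cols)]

img = [[1 if (r, c) == (0, 0) else 0 for c in range(4)] for r in range(4)]
print(edt2d_sq(img))
```

The same decomposition extends to n dimensions with one 1D pass per axis, which is exactly why the parallel complexity in the abstract scales linearly in n.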

  • Integrated Pre-Fetching and Replacing Algorithm for Graceful Image Caching

    Zhou SU  Teruyoshi WASHIZAWA  Jiro KATTO  Yasuhiko YASUDA  

     
    PAPER-Multimedia Systems

      Vol:
    E86-B No:9
      Page(s):
    2753-2763

The efficient distribution of stored information has become a major concern on the Internet. Since web workload characteristics show that more than 60% of network traffic is caused by image documents, how to efficiently distribute image documents from servers to end clients is an important issue. Proxy caching is an efficient way to reduce network traffic, and it has been shown in recent years that an image caching method (Graceful Caching) based on a hierarchical coding format performs better than conventional caching schemes. However, as cache capacity is limited, how to allocate the cache memory to achieve the minimum expected delay time remains an open problem. This paper presents an integrated caching algorithm to deal with this problem for image databases, web browsers, proxies, and other similar applications on the Internet. By analyzing the web request distribution of the Graceful Caching, both replacing and pre-fetching algorithms are proposed. We also show that our proposal can be carried out based on information readily available in the proxy server; it flexibly adapts its parameters to the hit rates and access patterns of users' requested documents in the Graceful Caching. Finally, we verify the performance of this algorithm by simulations.

  • Fast Motion Estimation Based on Binary Edge Information

    Won Bae PARK  Nae Joung KWAK  Young Jun SONG  Jae Hyeong AHN  

     
    LETTER-Image Processing, Image Pattern Recognition

      Vol:
    E86-D No:8
      Page(s):
    1456-1458

In this paper, we propose a fast full-search block matching algorithm for motion estimation based on binary edge information. The binary edge information allows a faster search by reducing the computational complexity. It also reduces the error generated by blocks located on the boundaries of moving objects. After transforming the input image into an edge-based image using Sobel masks, we convert the result into a binary edge image using median-cut quantization. We then perform block matching using the binary edge image. If there exist blocks for which the error of the binary block matching exceeds a threshold, we perform edge intensity-based block matching only within those blocks. We improve computational efficiency by eliminating unnecessary searching in no-motion regions. Simulation results show that the proposed method reduces the computational complexity and provides PSNR performance similar to the Full Search Block Matching Algorithm (FS-BMA).
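The core of binary block matching is counting mismatched pixels between binary blocks, which replaces the sum-of-absolute-differences cost of conventional full search. The sketch below shows this matching step on a toy binary frame; the Sobel and median-cut preprocessing stages are omitted, and the frame contents are invented for illustration.

```python
def binary_block_error(ref, cand):
    """Number of mismatched pixels between two equal-size binary blocks
    (the XOR population count used in binary block matching)."""
    return sum(
        r != c
        for row_r, row_c in zip(ref, cand)
        for r, c in zip(row_r, row_c)
    )

def best_match(frame, block, top, left, search=2):
    """Full search over a window around (top, left): return the error and
    displacement (err, dy, dx) minimizing the binary matching error."""
    h, w = len(block), len(block[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(frame) - h and 0 <= x <= len(frame[0]) - w:
                cand = [row[x:x + w] for row in frame[y:y + h]]
                err = binary_block_error(block, cand)
                if best is None or err < best[0]:
                    best = (err, dy, dx)
    return best

# Toy frame: a 2x2 patch of ones located at rows 3-4, columns 3-4.
frame = [[0] * 6 for _ in range(6)]
frame[3][3] = frame[3][4] = frame[4][3] = frame[4][4] = 1
block = [[1, 1], [1, 1]]
print(best_match(frame, block, top=2, left=2))
```

On binary data this error can be computed with bitwise XOR and popcount over packed words, which is where the speedup over intensity-based matching comes from.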

  • Mixture Density Models Based on Mel-Cepstral Representation of Gaussian Process

    Toru TAKAHASHI  Keiichi TOKUDA  Takao KOBAYASHI  Tadashi KITAMURA  

     
    PAPER

      Vol:
    E86-A No:8
      Page(s):
    1971-1978

This paper defines a new kind of mixture density model for modeling a quasi-stationary Gaussian process based on mel-cepstral representation. The conventional AR mixture density model can be applied to modeling a quasi-stationary Gaussian AR process, but it cannot model spectral zeros. In contrast, the proposed model is based on a frequency-warped exponential (EX) model. Accordingly, it can represent spectral poles and zeros with equal weights, and, furthermore, the model spectrum has high resolution at low frequencies. A parameter estimation algorithm for the proposed model is also derived based on the EM algorithm. Experimental results show that the proposed model outperforms the AR mixture density model for modeling a frequency-warped EX process.

  • A Modified Genetic Algorithm for Multiuser Detection in DS/CDMA Systems

    Mahrokh G. SHAYESTEH  Mohammad B. MENHAJ  Babak G. NOBARY  

     
    PAPER-Wireless Communication Technology

      Vol:
    E86-B No:8
      Page(s):
    2377-2388

Multiple access interference and the near-far effect degrade the performance of the conventional single-user detector in DS/CDMA systems. Due to the high complexity of the optimum multiuser detector, suboptimal multiuser detectors with less complexity and reasonable performance have received considerable attention. In this paper we apply the classic and a new modified genetic algorithm to multiuser detection of DS/CDMA signals. It is shown that the classic genetic algorithm (GA) reaches an error floor at high signal-to-noise ratios (SNR), while the performance of the proposed modified GA is much better than that of the classic one and is comparable to the optimum detector with much less complexity. The results hold for both AWGN and fading channels. We also describe another GA, called a meta-GA, used to find the optimum parameters of the modified GA. We compare the performance of the proposed method with that of other detectors used in CDMA.
