
Keyword Search Result

[Keyword] algorithm(2137hit)

1281-1300hit(2137hit)

  • Design and Evaluation of a High Speed Routing Lookup Architecture

    Jun ZHANG  JeoungChill SHIM  Hiroyuki KURINO  Mitsumasa KOYANAGI  

     
    PAPER-Implementation and Operation

      Vol:
    E87-B No:3
      Page(s):
    406-412

    The IP routing lookup problem is equivalent to finding the longest prefix of a packet's destination address in a routing table. Designing a high-performance IP routing lookup architecture is challenging because of increasing traffic, higher link speeds, frequent updates and growing routing table sizes. First, increasing traffic and higher link speeds require that IP routing lookup be executed at wire speed. Second, frequent routing table updates require that insertion and deletion operations be simple and fast. Finally, growing routing table sizes call for using less memory in order to reduce cost. Although many schemes achieve fast lookup, less attention has been paid to the latter two factors. This paper proposes a novel pipelined IP routing lookup architecture using selective binary search on hash tables organized by prefix length. The evaluation results show that it can perform IP lookup operations at a maximum rate of one lookup per cycle. The hash operation ratio for one lookup can be reduced to about 1%, fewer than two hash operations are needed for one table update, and only 512 Kbytes of SRAM are needed for a routing table with about 43,000 prefixes. The architecture thus achieves higher performance than existing schemes.
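
    For illustration only (not the paper's pipelined hardware design): a minimal Python sketch of binary search over prefix lengths using one hash table per length, with Waldvogel-style "marker" entries that steer the search toward longer matching prefixes. This is the underlying software technique the abstract refers to; the toy routing table and next-hop names below are made up.

    ```python
    def build_tables(routes):
        """routes: dict mapping bit-string prefix -> next hop (hypothetical data)."""
        lengths = sorted({len(p) for p in routes})
        tables = {L: {} for L in lengths}

        def bmp(bits):
            """Next hop of the longest real prefix of 'bits', or None."""
            for L in sorted((l for l in lengths if l <= len(bits)), reverse=True):
                if bits[:L] in routes:
                    return routes[bits[:L]]
            return None

        for p, hop in routes.items():          # real route entries
            tables[len(p)][p] = hop

        def add_markers(lo, hi):               # markers at lengths the search may probe
            if lo > hi:
                return
            mid = (lo + hi) // 2
            L = lengths[mid]
            for p in routes:
                if len(p) > L and p[:L] not in tables[L]:
                    tables[L][p[:L]] = bmp(p[:L])   # marker remembers best match so far
            add_markers(lo, mid - 1)
            add_markers(mid + 1, hi)
        add_markers(0, len(lengths) - 1)
        return tables, lengths

    def lookup(dst_bits, tables, lengths):
        """Longest-prefix match by binary search over the set of prefix lengths."""
        best, lo, hi = None, 0, len(lengths) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            L = lengths[mid]
            hit = tables[L].get(dst_bits[:L], 'miss')
            if hit == 'miss':
                hi = mid - 1          # nothing this long: search shorter lengths
            else:
                if hit is not None:
                    best = hit        # real route (or a marker carrying one)
                lo = mid + 1          # hit or marker: search longer lengths
        return best

    routes = {'1': 'hop-A', '00': 'hop-B', '111': 'hop-C'}   # hypothetical prefixes
    tables, lengths = build_tables(routes)
    print(lookup('110', tables, lengths))   # -> hop-A (the marker keeps this match)
    print(lookup('001', tables, lengths))   # -> hop-B
    ```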

  • Don't Care Identification and Statistical Encoding for Test Data Compression

    Seiji KAJIHARA  Kenjiro TANIGUCHI  Kohei MIYASE  Irith POMERANZ  Sudhakar M. REDDY  

     
    PAPER-Test Generation and Compaction

      Vol:
    E87-D No:3
      Page(s):
    544-550

    This paper describes a method of test data compression for a given test set using statistical encoding. In order to maximize the effectiveness of statistical encoding, the method first converts some specified input values in the test set to unspecified ones without losing fault coverage, and then reassigns appropriate logic values to the unspecified inputs. Experimental results for ISCAS-89 benchmark circuits show that the proposed method can, on average, reduce the test data volume to less than 25% of that required for the original test set.
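
    A minimal Python sketch of the general idea only: fill don't-care ('X') bits so that fixed-width test blocks repeat, then Huffman-encode the blocks. The fill rule below (copy the currently most frequent compatible pattern) and the 4-bit toy blocks are simplified stand-ins for the paper's don't-care identification and reassignment procedure, not a reproduction of it.

    ```python
    import heapq
    from collections import Counter
    from itertools import count

    def fill_dont_cares(blocks):
        """Assign X bits so each block matches a frequent fully specified pattern."""
        freq = Counter(b for b in blocks if 'X' not in b)
        filled = []
        for b in blocks:
            if 'X' not in b:
                filled.append(b)
                continue
            compatible = [p for p in freq
                          if all(x in ('X', y) for x, y in zip(b, p))]
            best = max(compatible, key=freq.__getitem__, default=b.replace('X', '0'))
            filled.append(best)
            freq[best] += 1
        return filled

    def huffman_code(symbols):
        """Return a prefix code (symbol -> bit string) from symbol frequencies."""
        freq = Counter(symbols)
        tiebreak = count()
        heap = [(f, next(tiebreak), {s: ''}) for s, f in freq.items()]
        heapq.heapify(heap)
        if len(heap) == 1:                      # degenerate single-symbol case
            return {next(iter(freq)): '0'}
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: '0' + w for s, w in c1.items()}
            merged.update({s: '1' + w for s, w in c2.items()})
            heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
        return heap[0][2]

    # Hypothetical 4-bit test blocks with don't cares.
    blocks = ['10X1', '1001', '0X10', '1001', '0110', '10XX']
    filled = fill_dont_cares(blocks)
    code = huffman_code(filled)
    encoded = ''.join(code[b] for b in filled)
    print(filled, code, f'{len(encoded)} bits vs {4 * len(blocks)} bits')
    ```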

  • Finding Web Communities by Maximum Flow Algorithm Using Well-Assigned Edge Capacities

    Noriko IMAFUJI  Masaru KITSUREGAWA  

     
    PAPER

      Vol:
    E87-D No:2
      Page(s):
    407-415

    A web community is a set of web pages that provide resources on a specific topic. Various methods for finding web communities based on link analysis have been proposed in the literature. The method proposed in this paper builds on a previously proposed method that uses the maximum flow algorithm. Our objective in using the maximum flow algorithm is to extract a subgraph that can be recognized as a good web community in terms of both quantity and quality. This paper first discusses the features of the maximum-flow-based method. The previously proposed approach has the problem that a certain graph structure containing noise (i.e., irrelevant pages) is always extracted. This problem is mainly caused by assigning a constant value to the edge capacities. This paper proposes an assignment of variable edge capacities based on the hub and authority scores obtained from the HITS calculation. To examine the effects of the proposed method, we performed experiments using a Japanese web archive crawled in February 2002. Our experimental results demonstrate that the proposed method removes the noise pages caused by constant edge capacities and improves the quality of web communities.
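
    As a rough illustration (not the paper's construction), the sketch below uses networkx to extract the source side of a minimum s-t cut after deriving edge capacities from HITS hub/authority scores. The particular capacity formula, the seed/sink choice, and the toy graph are all hypothetical.

    ```python
    import networkx as nx

    def community_by_min_cut(web, seed, sink, alpha=1.0):
        hubs, auths = nx.hits(web, max_iter=1000)
        flow_net = nx.DiGraph()
        for u, v in web.edges():
            # Hypothetical rule: a link from a good hub to a good authority gets a
            # large capacity, so the minimum cut avoids separating such pairs.
            cap = 1.0 + alpha * (hubs[u] + auths[v])
            flow_net.add_edge(u, v, capacity=cap)
            flow_net.add_edge(v, u, capacity=cap)   # treat links as bidirectional
        cut_value, (source_side, _) = nx.minimum_cut(flow_net, seed, sink)
        return source_side

    web = nx.DiGraph([('seed', 'a'), ('a', 'b'), ('b', 'seed'), ('a', 'c'),
                      ('c', 'd'), ('d', 'e'), ('e', 'c'), ('c', 'sink')])
    print(community_by_min_cut(web, 'seed', 'sink'))
    ```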

  • A Near-Optimum Parallel Algorithm for a Graph Layout Problem

    Rong-Long WANG  Xin-Shun XU  Zheng TANG  

     
    PAPER-Neural Networks and Bioengineering

      Vol:
    E87-A No:2
      Page(s):
    495-501

    We present a learning algorithm for the Hopfield neural network that minimizes edge crossings in linear drawings of nonplanar graphs. The proposed algorithm uses the Hopfield neural network to obtain a locally optimal number of edge crossings, and adjusts the balance between terms of the energy function to make the network escape from that local optimum. The proposed algorithm is tested on a variety of graphs, including some "real world" instances of interconnection networks, and is compared with several existing algorithms. The experimental results indicate that the proposed algorithm yields optimal or near-optimal solutions and outperforms the compared algorithms.

  • Improving Reservation Protocol for Ad Hoc Networks Using Two-Division MAC Backoff Algorithm

    Xuejun TIAN  Tetsuo IDEGUCHI  Takashi OKUDA  

     
    PAPER-Network

      Vol:
    E87-D No:2
      Page(s):
    436-443

    An ad hoc network is a collection of wireless mobile nodes that dynamically form a temporary network without any existing network infrastructure or centralized administration. The choice of a medium access scheme is difficult in ad hoc networks due to the time-varying network topology and the lack of centralized control. In this paper, we propose a novel multichannel schedule-based Medium Access Control (MAC) protocol for ad hoc networks, named Multichannel Reservation Protocol for TDMA-based networks (MRPT). MRPT ensures collision-free transmission on successfully reserved data links, even when hidden terminals exist. Reservation in MRPT is based on a control channel, and in order to improve throughput we propose Four-Phase-Two-Division (FPTD) as the media access scheme of the control channel for broadcasting control and reservation messages. In FPTD, collisions are resolved rapidly with an efficient backoff algorithm, so that system blocking is avoided under high traffic. In this paper, we also present the throughput performance of MRPT, which remains high with no system blocking even under high traffic load.

  • Highly Concurrent Group Mutual Exclusion Algorithms Based on Ticket Orders

    Masataka TAKAMURA  Yoshihide IGARASHI  

     
    PAPER

      Vol:
    E87-D No:2
      Page(s):
    322-329

    Group mutual exclusion is an interesting generalization of the mutual exclusion problem. This problem was introduced by Joung, and some algorithms for it have been proposed by incorporating mutual exclusion algorithms. Group mutual exclusion occurs naturally in a situation where a resource can be shared by processes of the same group, but not by processes of different groups. It is also called the congenial talking philosophers problem. In this paper we propose two algorithms based on ticket orders for the group mutual exclusion problem on the asynchronous shared memory model. These algorithms are modifications of the Bakery algorithm. They satisfy lockout freedom and achieve a high degree of concurrency. Each of these algorithms uses single-writer shared variables together with two multi-writer shared variables that are never written concurrently. One of the algorithms has an additional desirable property, called smooth admission: while the resource is occupied by the leader (called the chair), a process wishing to join the same group as the leader can be granted use of the resource in constant time.
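
    For context, the sketch below is the classic Lamport Bakery algorithm (plain mutual exclusion, not the group version and not the authors' ticket-order modifications), showing the "doorway" and ticket comparison that the proposed algorithms build on. The busy-waiting Python threads are only a demonstration and rely on CPython's GIL for effectively atomic reads of the shared lists.

    ```python
    import threading

    N = 3
    choosing = [False] * N
    number = [0] * N
    counter = 0                     # shared resource touched only in the critical section

    def worker(i, rounds=100):
        global counter
        for _ in range(rounds):
            # Doorway: draw a ticket larger than every ticket currently taken.
            choosing[i] = True
            number[i] = 1 + max(number)
            choosing[i] = False
            # Wait until every process with a smaller (ticket, id) pair has left.
            for j in range(N):
                while choosing[j]:
                    pass
                while number[j] != 0 and (number[j], j) < (number[i], i):
                    pass
            counter += 1            # critical section
            number[i] = 0           # exit section: release the ticket

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                  # expected: N * rounds = 300
    ```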

  • Time-Efficient Multicast to Local Vertices in Star Interconnection Networks under the Single-Port Model

    Satoshi FUJITA  

     
    PAPER

      Vol:
    E87-D No:2
      Page(s):
    315-321

    In this paper, we consider the problem of constructing a multicast tree in the star graph under the single-port communication model. Unlike previous studies aimed at constructing space-efficient multicast trees, we adopt the completion time of each multicast as the objective function to be minimized. In particular, we study a special case of the problem in which all destination vertices are immediate neighbors of the source vertex, and propose a multicast scheme for the star graph of dimension n that completes in 1.3125 log2 n + O(log log n) time units. This running time is at most 1.3125 times that of an optimal scheme.

  • A Self-Stabilizing Distributed Algorithm for the Steiner Tree Problem

    Sayaka KAMEI  Hirotsugu KAKUGAWA  

     
    PAPER

      Vol:
    E87-D No:2
      Page(s):
    299-307

    Self-stabilization is a theoretical framework for non-masking fault-tolerant distributed algorithms. In this paper, we investigate the Steiner tree problem in distributed systems and propose a self-stabilizing heuristic solution to the problem. Our algorithm is constructed from four layered modules (sub-algorithms): construction of a shortest path forest, transformation of the network, construction of a minimum spanning tree, and pruning of unnecessary links and processes. Its competitiveness is 2(1 - 1/l), where l is the number of leaves in an optimal solution.
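
    To illustrate the layered structure (shortest paths, spanning tree, pruning), here is a sequential sketch of the classic shortest-path-based Steiner heuristic, which achieves the same 2(1 - 1/l) bound. It is not the authors' self-stabilizing distributed algorithm; the networkx calls and the toy graph are assumptions for illustration.

    ```python
    import itertools
    import networkx as nx

    def steiner_heuristic(G, terminals):
        # 1. Metric closure restricted to the terminals (shortest-path distances).
        closure = nx.Graph()
        paths = {}
        for s, t in itertools.combinations(terminals, 2):
            length, path = nx.single_source_dijkstra(G, s, t, weight='weight')
            closure.add_edge(s, t, weight=length)
            paths[(s, t)] = path
        # 2. Minimum spanning tree of the closure.
        mst = nx.minimum_spanning_tree(closure, weight='weight')
        # 3. Expand each closure edge back into its shortest path in G.
        tree = nx.Graph()
        for s, t in mst.edges():
            nx.add_path(tree, paths.get((s, t)) or paths[(t, s)])
        tree = nx.minimum_spanning_tree(
            G.edge_subgraph(tree.edges()).copy(), weight='weight')
        # 4. Prune non-terminal leaves until none remain.
        leaves = [v for v in tree if tree.degree(v) == 1 and v not in terminals]
        while leaves:
            tree.remove_nodes_from(leaves)
            leaves = [v for v in tree if tree.degree(v) == 1 and v not in terminals]
        return tree

    G = nx.Graph()
    G.add_weighted_edges_from([('a', 'x', 1), ('x', 'b', 1), ('x', 'c', 1),
                               ('a', 'b', 3), ('b', 'c', 3), ('a', 'c', 3)])
    T = steiner_heuristic(G, ['a', 'b', 'c'])
    print(sorted(T.edges(data='weight')))   # expected: the star through x, total cost 3
    ```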

  • Schema Co-Evolutionary Algorithm (SCEA)

    Kwee-Bo SIM  Dong-Wook LEE  

     
    PAPER-Algorithms

      Vol:
    E87-D No:2
      Page(s):
    416-425

    The simple genetic algorithm (SGA) is a population-based optimization method based on Darwinian natural selection. The theoretical foundations of the SGA are the Schema Theorem and the Building Block Hypothesis. Although the SGA does well as an optimization method in many applications, it still does not guarantee convergence to a global optimum in GA-hard problems and deceptive problems. As an alternative, therefore, there is growing interest in co-evolutionary systems, where two populations constantly interact and cooperate with each other. In this paper we propose a schema co-evolutionary algorithm (SCEA) and show why the SCEA works better than the SGA in terms of an extended schema theorem. Experimental analyses using the Walsh-Schema Transform show that the SCEA works well in GA-hard problems, including deceptive problems.

  • Efficient Generation of Plane Triangulations with Specified Maximum Degree

    Hiroyuki TANAKA  Shin-ichi NAKANO  

     
    PAPER

      Vol:
    E87-D No:2
      Page(s):
    330-336

    A "based" plane triangulation is a plane triangulation with one designated edge on the outer face. In this paper we give a simple algorithm to generate all biconnected based plane triangulations having exactly n vertices and maximum degree exactly D. The algorithm uses O(n) space and generates such triangulations in O(1) time per triangulation, without duplications.

  • Improvement of Wavelet Based Parameter Estimations of Nearly 1/f Processes

    Shigeo WADA  Nao ITO  

     
    PAPER-Digital Signal Processing

      Vol:
    E87-A No:2
      Page(s):
    417-423

    Nearly 1/f processes are known to be important stochastic models for scale-invariant data analysis in a number of fields. In this paper, two wavelet-based parameter estimation methods for nearly 1/f processes are proposed. The conventional method based on the wavelet transform with the EM algorithm does not give reliable parameter estimates when the data length is short. Moreover, precise parameter values are not estimated when a spectral gap exists in the 1/f process. First, in order to improve the accuracy of the estimation when the data length is short, a parameter estimation method based on the wavelet transform with complementary sampling is proposed. Next, in order to reduce the effect of the spectral gap, a parameter estimation method based on wavelet packets with the EM algorithm is proposed. Simulation results are given to verify the effectiveness of the proposed methods.

  • A Time- and Communication-Optimal Distributed Sorting Algorithm in a Line Network and Its Extension to the Dynamic Sorting Problem

    Atsushi SASAKI  

     
    PAPER-Algorithms and Data Structures

      Vol:
    E87-A No:2
      Page(s):
    444-453

    This paper presents a strictly time- and communication-optimal distributed sorting algorithm for a line network. A strictly time-optimal distributed sorting algorithm for a line network has already been designed. However, its communication complexity is not strictly optimal, and it seems difficult to extend it to other settings, such as multiple elements per process, or to the dynamic sorting problem, where the number of elements each process should hold in the solution differs from that in the initial state. Therefore, the algorithm in this paper was designed using an alternative approach that makes it strictly time- and communication-optimal. Moreover, an extension to the dynamic sorting problem is described.
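
    As a point of reference only, the sketch below simulates odd-even transposition sort, the textbook round-based scheme for sorting on a line of n processes (one element each). It is not the paper's time- and communication-optimal algorithm; it merely shows the kind of neighbour-exchange model the problem is set in.

    ```python
    def odd_even_transposition(values):
        """Simulate n rounds of neighbour exchanges on a line of n processes."""
        vals = list(values)
        n = len(vals)
        for rnd in range(n):
            start = rnd % 2                    # even rounds: pairs (0,1),(2,3),...; odd: (1,2),(3,4),...
            for i in range(start, n - 1, 2):
                if vals[i] > vals[i + 1]:      # neighbours exchange and keep min/max
                    vals[i], vals[i + 1] = vals[i + 1], vals[i]
        return vals

    print(odd_even_transposition([5, 1, 4, 2, 8, 0, 3]))   # -> [0, 1, 2, 3, 4, 5, 8]
    ```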

  • On Group Multicast Routing with Bandwidth Constraint: A Lower Bound and Performance Evaluation

    Chor Ping LOW  Ning WANG  

     
    PAPER-Network

      Vol:
    E87-B No:1
      Page(s):
    124-131

    Group multicasting is a generalization of multicasting whereby every member of a group is allowed to multicast messages to the other members of the same group. The group multicast routing problem (GMRP) is that of finding a set of multicast trees with bandwidth requirements, each rooted at a member of the group, for multicasting messages to the other members of the group. An optimal solution to GMRP is a set of trees, one for each member of the group, that incurs the least overall cost. This problem is known to be NP-complete, and hence heuristic algorithms are likely to be the only viable approach for computing near-optimal solutions in practice. In this paper, we derive a lower bound on the cost of an optimal solution to GMRP by using Lagrangean relaxation and subgradient optimization. This lower bound is used to evaluate two existing heuristic algorithms in terms of their ability to find close-to-optimal solutions.

  • Coefficients Generation for the 4th-Order Leapfrog Sigma-Delta A/D Converters

    Wen-Bin LIN  Bin-Da LIU  

     
    PAPER-Analog Signal Processing

      Vol:
    E87-A No:1
      Page(s):
    231-242

    In this paper, a novel methodology for designing and analyzing high-performance sigma-delta leapfrog modulators for ultra-high-resolution analog-to-digital (A/D) converters is presented. The leapfrog topology, which is less sensitive to component variations, is analyzed by considering the noise transfer function (NTF). Through theoretical analysis, the loop coefficients are constrained to a small, clear and definite range called the stable region (SR). With the output voltage limited to within 2 V, an absolutely stable region (ASR) is obtained. A program that analyzes and generates the required coefficients is constructed so that this topology can be designed easily. For an over-sampling ratio (OSR) of 256 and coefficients taken from the ASR, the signal-to-noise ratio (SNR) and dynamic range (DR) are 105 dB and 100 dB, respectively. The behavioral simulation results show that the system is not only stable and efficient but also suitable for high-resolution applications.

  • Decomposition Approach of Banker's Algorithm: Design and Concurrency Analysis

    Hoon OH  

     
    PAPER-Software Systems

      Vol:
    E87-D No:1
      Page(s):
    183-195

    Concurrency in computing systems that use a deadlock avoidance strategy varies greatly according to how the resource usage plans of processes are used in testing for the possibility of deadlock. If a detailed resource usage plan of each process is to be taken into account, the deadlock avoidance problem is known to be NP-complete. A decomposition model for managing resources is proposed, in which each process is logically partitioned into a number of segments, each of which uses at least one resource. It is shown that one of our deadlock avoidance algorithms achieves the best concurrency among the polynomial-bounded algorithms. We also present a heuristic algorithm that achieves concurrency close to the optimum. Finally, we analyze the concurrency of the various algorithms.
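
    For background, here is a minimal sketch of the classic Banker's-algorithm safety test, the baseline deadlock-avoidance check that the paper's decomposition model refines; the segment-wise resource plans of the paper are not modelled, and the state below is the standard textbook example.

    ```python
    def is_safe(available, allocation, maximum):
        """available: free units per resource type;
        allocation[i], maximum[i]: current and maximum claims of process i."""
        n, m = len(allocation), len(available)
        need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
        work = list(available)
        finished = [False] * n
        order = []
        progressed = True
        while progressed:
            progressed = False
            for i in range(n):
                if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                    # Process i can run to completion and release its allocation.
                    work = [work[j] + allocation[i][j] for j in range(m)]
                    finished[i] = True
                    order.append(i)
                    progressed = True
        return all(finished), order

    # Textbook state: 3 resource types, 5 processes.
    available  = [3, 3, 2]
    allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
    maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
    print(is_safe(available, allocation, maximum))   # -> (True, [1, 3, 4, 0, 2])
    ```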

  • Combinatorial Effects of Timer Control and Backoff Algorithms on Bulk Data Transfer over Two-State Markovian Channels

    Katsumi SAKAKIBARA  Takashi GONDA  Jiro YAMAKITA  

     
    LETTER-Fundamental Theories

      Vol:
    E87-B No:1
      Page(s):
    165-170

    We analytically investigate the combinatorial effects of timer control and backoff algorithms on the performance of bulk data transfer over two-state Markovian packet error channels. Numerical results for throughput, energy efficiency, and the probabilities of packet loss and loss of bulk data indicate that, on the whole, linear backoff algorithms outperform binary exponential ones when they are employed at the logical link sublayer with timer control.
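
    The letter's results are analytical; the sketch below is only a small Monte-Carlo illustration of the setting: retransmitting a packet over a two-state Markovian (Gilbert-Elliott style) error channel under linear versus binary exponential backoff. The transition probabilities, error rates, retry limit and delay metric are made-up values, not the paper's model or conclusions.

    ```python
    import random

    P_G2B, P_B2G = 0.05, 0.20          # good->bad and bad->good transition probabilities
    ERR_GOOD, ERR_BAD = 0.01, 0.60     # packet error rates in each channel state
    MAX_RETRY = 8

    def send_packet(backoff, rng):
        """Return slots spent until success, or None if the retry limit is hit."""
        state = 'G'
        elapsed = 0
        for attempt in range(MAX_RETRY + 1):
            wait = backoff(attempt)
            for _ in range(wait + 1):          # idle slots plus the transmission slot
                state = ('B' if rng.random() < P_G2B else 'G') if state == 'G' \
                        else ('G' if rng.random() < P_B2G else 'B')
                elapsed += 1
            err = ERR_GOOD if state == 'G' else ERR_BAD
            if rng.random() >= err:
                return elapsed
        return None

    def simulate(backoff, trials=20000, seed=1):
        rng = random.Random(seed)
        results = [send_packet(backoff, rng) for _ in range(trials)]
        ok = [r for r in results if r is not None]
        return len(ok) / trials, sum(ok) / len(ok)

    linear = lambda k: k                  # wait k slots before the k-th retry
    binexp = lambda k: (2 ** k) - 1       # wait 2^k - 1 slots before the k-th retry
    for name, policy in [('linear', linear), ('binary exponential', binexp)]:
        success, delay = simulate(policy)
        print(f'{name:20s} success={success:.3f}  mean delay={delay:.1f} slots')
    ```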

  • A Note on the Lattice Factoring Method

    Tetsuya IZU  

     
    LETTER

      Vol:
    E87-A No:1
      Page(s):
    221-223

    In 1999, Boneh et al. proposed the Lattice Factoring Method (LFM) for the integer factoring problem for composites of the form N = p^r q, employing the LLL algorithm. The time complexity of LFM is measured by the number of calls to the LLL algorithm; in the worst case, this number is 2^{c log p} for a certain constant c. In 2001, Uchiyama and Kanayama introduced a novel criterion and provided an improved algorithm that runs (2k - p)/|p - N_{r+1}| times faster (for certain constants k and N_{r+1}). In this letter, we note another practical improvement, applicable to both the original and the improved LFM, which provides about a 2-times speed-up.

  • Analysis of Baby-Step Giant-Step Algorithms for Non-uniform Distributions

    Koh-ichi NAGAO  Shigenori UCHIYAMA  Naoki KANAYAMA  Kazuto MATSUO  

     
    PAPER-Fundamental

      Vol:
    E87-A No:1
      Page(s):
    10-17

    The baby-step giant-step algorithm, BSGS for short, was proposed by Shanks in order to compute the class number of an imaginary quadratic field. This algorithm is now known as a very useful tool for computations in finite groups, such as computing discrete logarithms and counting the number of elements. In particular, the BSGS is commonly used for counting the rational points on the Jacobian of a hyperelliptic curve over a finite field. Indeed, research on practical improvements of the BSGS has recently received a lot of attention from a cryptographic viewpoint. In this paper, we explicitly analyze the modified BSGS, proposed by Blackburn and Teske, for non-uniform distributions of the group order. More precisely, we refine the Blackburn-Teske algorithm and propose a criterion for deciding the effectiveness of their algorithm; namely, our criterion explicitly shows what distribution is needed in order for their algorithm to be faster than the original BSGS. That is, we present, for the first time, a necessary and sufficient condition under which the modified BSGS is effective.
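
    For reference, a minimal Python sketch of the original (uniform-distribution) baby-step giant-step algorithm for discrete logarithms in (Z/pZ)^*, i.e. the baseline that the Blackburn-Teske modification refines; the non-uniform variant and the point-counting application are not reproduced here, and the small prime and exponent are arbitrary examples.

    ```python
    from math import isqrt

    def bsgs(g, h, p, order):
        """Find x with g^x = h (mod p), 0 <= x < order, or None if no solution."""
        m = isqrt(order) + 1
        baby = {}                          # baby steps: g^j -> j
        gj = 1
        for j in range(m):
            baby.setdefault(gj, j)
            gj = (gj * g) % p
        factor = pow(g, -m, p)             # g^{-m} mod p (Python 3.8+ modular inverse)
        gamma = h % p
        for i in range(m):                 # giant steps: h * g^{-im}
            if gamma in baby:
                return i * m + baby[gamma]
            gamma = (gamma * factor) % p
        return None

    p = 1019                               # small prime, arbitrary example
    g = 2                                  # a generator of (Z/pZ)^* for this p
    h = pow(g, 347, p)
    print(bsgs(g, h, p, p - 1))            # -> 347
    ```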

  • An Algorithm to Use in Adaptive Wideband Duplexer for Software Radio

    Shyama KANNANGARA  Michael FAULKNER  

     
    PAPER

      Vol:
    E86-B No:12
      Page(s):
    3452-3455

    This paper proposes a new algorithm to control an adaptive duplexer for multiband software radio. The duplexer uses a wideband low-isolation device combined with a two-tap/two-loop adjustable canceller to eliminate the need for multiple switched high-isolation duplexers. The taps are adjusted to provide isolation peaks in the transmit and receive bands. The algorithm is based on the superposition of squared errors and achieves 66 dB of isolation of the transmit signal and 37 dB of cancellation of the transmitter noise in the receive band.

  • A Self-Adjusting Destage Algorithm with High-Low Water Mark in Cached RAID5

    Young Jin NAM  Chanik PARK  

     
    PAPER-Dependable Systems

      Vol:
    E86-D No:12
      Page(s):
    2527-2535

    The High-Low Water Mark (HLWM) destage algorithm is widely used to enable a cached RAID5 to flush dirty data from its write cache to disks, owing to the simplicity of its operations. It starts and stops the destaging process based on two thresholds that are configured at initialization time with the best available knowledge of the underlying storage performance and the workload pattern, which includes traffic intensity, access patterns, etc. However, each time the current workload deviates from the original, the thresholds need to be re-configured for the changed workload. This paper proposes an efficient destage algorithm, called adaptive thresholding, which automatically re-configures its thresholds according to the changed traffic intensity and access patterns. The core of adaptive thresholding is to define the two thresholds as the product of the observed increasing and decreasing rates of the write cache occupancy level and the times required to fill and empty the write cache. We implement the proposed algorithm on an actual RAID system and verify its auto-reconfiguration ability with synthetic workloads having different levels of traffic intensity and access patterns. Performance evaluations under well-known traced workloads reveal that the proposed algorithm reduces disk IO traffic by about 12%, with a 6% increase in the overwrite ratio, compared with the HLWM algorithm.
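
    The sketch below is one hypothetical reading of the adaptive-thresholding idea in the abstract: recompute the high and low water marks from the smoothed increase/decrease rates of the write-cache occupancy. The smoothing, the headroom/reserve times, and the cache size are made-up parameters, not the implementation evaluated in the paper.

    ```python
    class AdaptiveDestageThresholds:
        def __init__(self, cache_blocks, alpha=0.2):
            self.cache = cache_blocks
            self.alpha = alpha                 # exponential smoothing factor (assumed)
            self.fill_rate = 1.0               # observed dirty-block arrival rate (blocks/s)
            self.drain_rate = 1.0              # observed destage rate (blocks/s)

        def observe(self, fill_rate, drain_rate):
            """Smooth the measured occupancy increase/decrease rates."""
            a = self.alpha
            self.fill_rate = (1 - a) * self.fill_rate + a * fill_rate
            self.drain_rate = (1 - a) * self.drain_rate + a * drain_rate

        def thresholds(self, headroom_s=5.0, reserve_s=2.0):
            # Hypothetical rule: keep enough free space above the high mark to absorb
            # headroom_s seconds of arrivals, and keep enough dirty data below the low
            # mark to feed the disks for reserve_s seconds.
            high = max(0.0, self.cache - self.fill_rate * headroom_s)
            low = min(high, self.drain_rate * reserve_s)
            return int(high), int(low)

    marks = AdaptiveDestageThresholds(cache_blocks=10000)
    marks.observe(fill_rate=150.0, drain_rate=600.0)   # light foreground traffic
    print(marks.thresholds())
    marks.observe(fill_rate=900.0, drain_rate=500.0)   # bursty foreground traffic
    print(marks.thresholds())                          # heavier traffic lowers the high mark
    ```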

1281-1300hit(2137hit)