
Keyword Search Result

[Keyword] ALG (2355 hits)

2321-2340 of 2355 hits

  • A 34.8 GHz 1/4 Static Frequency Divider Using AlGaAs/GaAs HBTs

    Yoshiki YAMAUCHI  Osaake NAKAJIMA  Koichi NAGATA  Hiroshi ITO  Tadao ISHIBASHI  

     
    PAPER
    Vol: E75-C No:10  Page(s): 1105-1109

    A one-by-four static frequency divider using AlGaAs/GaAs heterojunction bipolar transistors (HBTs) was designed to operate at a bias condition that gave a maximum cutoff frequency fT and a maximum oscillation frequency fmax. The fT and fmax applied to the divider were 68 GHz and 56 GHz, respectively. In tests, the circuit operated up to 34.8 GHz at a power supply voltage of 9 V and a power dissipation of 495 mW. A low minimum input signal power level of 0 dBm was also achieved.

  • Priority-List Scheduling in Timed Petri Nets

    Takenobu TANIDA  Toshimasa WATANABE  Masahiro YAMAUCHI  Kinji ONAGA  

     
    PAPER
    Vol: E75-A No:10  Page(s): 1394-1406

    This paper proposes two approximation algorithms, FM_SPLA and FM_DPLA, for priority-list scheduling in timed Petri nets. Their capability is compared through experimental results with that of the existing algorithms SPLA and DPLA, which the authors have proposed previously.
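
    The greedy idea behind priority-list scheduling of a timed Petri net can be sketched as follows. This is a minimal illustration only, not the authors' FM_SPLA or FM_DPLA: it assumes deterministic firing delays, tokens consumed when a firing starts and produced when it completes, and a fixed priority order over transitions.

```python
import heapq

def priority_list_schedule(transitions, M0, X, priority):
    """Greedy priority-list firing of a timed Petri net.

    transitions: {name: (inputs, outputs, delay)}, where inputs/outputs map
    place -> arc weight.  M0: initial marking.  X: required firing count per
    transition.  priority: transition names, highest priority first.
    Returns the total completion time and the list of (start_time, transition).
    """
    marking = dict(M0)
    remaining = dict(X)
    active = []                            # (finish_time, transition) min-heap
    schedule, now = [], 0.0

    def enabled(t):
        ins = transitions[t][0]
        return all(marking.get(p, 0) >= w for p, w in ins.items())

    while any(remaining.values()) or active:
        started = True
        while started:                     # start every firing possible at `now`
            started = False
            for t in priority:
                if remaining[t] > 0 and enabled(t):
                    ins, outs, delay = transitions[t]
                    for p, w in ins.items():
                        marking[p] -= w                   # consume at start
                    heapq.heappush(active, (now + delay, t))
                    schedule.append((now, t))
                    remaining[t] -= 1
                    started = True
                    break                  # rescan from the top priority
        if active:
            now, t = heapq.heappop(active)                # advance time
            for p, w in transitions[t][1].items():
                marking[p] = marking.get(p, 0) + w        # produce at finish
        elif any(remaining.values()):
            raise RuntimeError("deadlock: required firings cannot complete")
    return now, schedule

# Two transitions competing for one token in place p; t1 has priority.
net = {'t1': ({'p': 1}, {'p': 1}, 2.0), 't2': ({'p': 1}, {'p': 1}, 3.0)}
print(priority_list_schedule(net, {'p': 1}, {'t1': 2, 't2': 1}, ['t1', 't2']))
# -> (7.0, [(0.0, 't1'), (2.0, 't1'), (4.0, 't2')])
```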

  • Placement and Routing Algorithms for One-Dimensional CMOS Layout Synthesis with Physical Constraints

    Katsunori TANI  

     
    PAPER
    Vol: E75-A No:10  Page(s): 1286-1293

    This paper deals with the sub-problems of generating a mask pattern from the logical description of a large-scale CMOS circuit. The large-scale layout can be generated in divide-and-conquer style: divide a given circuit into a set of sub-circuits, generate the layout of each sub-circuit, and merge the resulting layouts to create the whole layout. This paper proposes a layout synthesis algorithm for a sub-circuit with physical constraints for the synthesis scheme above. The physical constraints considered here are the relative placement of logic cells (sets of logic gates) and a routing constraint based on the costs of wiring layers and vias. These constraints are given by the global optimizer in a two-dimensional layout synthesis routine, and they must be preserved in the subsequent one-dimensional layout synthesis for a sub-circuit. The routing constraint also serves to enhance circuit performance by limiting the use of wiring layers and vias for special nets such as a clock net. The placement constraint is maintained using a PQ-tree, a tree structure representing a set of restricted permutations of elements. One-dimensional layout synthesis determines the placement of transistors by an enhanced pairwise exchange method under the PQ-tree representation. The routing constraint is handled by a newly developed line-search routing method that uses cost-based searching. Experimental results for practical standard cells containing up to 200 transistors show that the algorithms can produce layouts comparable to handcrafted cells. In two-dimensional layout synthesis using the algorithms, the results for the benchmark circuits of the Physical Design Workshop 1989 (the MCNC benchmark circuits) are superior to the best results exhibited at the Design Automation Conference 1990.
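
    As an illustration of the placement-constraint representation, the sketch below enumerates the cell orderings a PQ-tree admits: any permutation of a P-node's children, but only the given order or its reverse for a Q-node. It is a toy sketch of the data structure only; the reduce/update operations and the pairwise exchange placement used in the synthesis algorithm are not shown.

```python
from itertools import permutations

# A PQ-tree node is ('leaf', cell), ('P', [children]) or ('Q', [children]).
def frontiers(node):
    """Yield every cell ordering admitted by a PQ-tree."""
    kind = node[0]
    if kind == 'leaf':
        yield (node[1],)
        return
    children = node[1]
    if kind == 'P':
        child_orders = permutations(children)        # any order of children
    else:                                            # 'Q': given order or reverse
        child_orders = (children, list(reversed(children)))
    seen = set()
    for order in child_orders:
        for combo in _product_frontiers(order):
            if combo not in seen:                    # dedupe identical subtrees
                seen.add(combo)
                yield combo

def _product_frontiers(children):
    if not children:
        yield ()
        return
    for head in frontiers(children[0]):
        for rest in _product_frontiers(children[1:]):
            yield head + rest

# Example: cells a, b, c must stay adjacent in this order or reversed (Q-node),
# and that group may be placed on either side of cell d (P-node).
tree = ('P', [('Q', [('leaf', 'a'), ('leaf', 'b'), ('leaf', 'c')]), ('leaf', 'd')])
print(sorted(frontiers(tree)))
# -> [('a','b','c','d'), ('c','b','a','d'), ('d','a','b','c'), ('d','c','b','a')]
```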

  • The Minimum Initial Marking Problem for Scheduling in Timed Petri Nets

    Toshimasa WATANABE  Takenobu TANIDA  Masahiro YAMAUCHI  Kenji ONAGA  

     
    PAPER
    Vol: E75-A No:10  Page(s): 1407-1421

    The subject of the paper is the minimum initial marking problem for scheduling in a timed Petri net PN: given a vector X of nonnegative integers, a P-invariant Y of PN, and a nonnegative integer π, find an initial marking M minimizing the value Y^tr·M among those initial markings M for which there is a schedule σ with total completion time τ(σ) ≤ π with respect to M, X and PN (a sequence of transitions, with the first transition firable on M, such that every transition t fires the prescribed number X(t) of times). The paper shows NP-hardness of the problem and proposes two approximation algorithms together with their experimental evaluation.
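
    Restated in symbols, with the relational operators lost in extraction restored and the schedule-existence condition left abstract:

```latex
\begin{align*}
\text{minimize}\quad   & Y^{\mathrm{tr}} M \\
\text{subject to}\quad & \exists\, \sigma \text{ feasible for } (M, X, \mathrm{PN})
                         \text{ such that } \tau(\sigma) \le \pi, \\
                       & \sigma \text{ fires every transition } t \text{ exactly } X(t) \text{ times}, \\
                       & M \ge 0 \text{ and integral}.
\end{align*}
```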

  • Net-Oriented Analysis and Design

    Shinichi HONIDEN  Naoshi UCHIHIRA  

     
    INVITED PAPER
    Vol: E75-A No:10  Page(s): 1317-1325

    Net-Oriented Analysis and Design (NOAD) is defined by three items: (1) various nets are utilized as an effective modeling method; (2) inter-relationships among the various nets are determined; (3) verification or analysis methods for nets are provided and are implemented based on the underlying mathematical theory, that is, net theory. Very few methods have been presented that satisfy all three items. For example, the Real-Time SA method covers item (1) only. The Object-Oriented Analysis and Design method (OOA/OOD) covers items (1) and (2). NOAD can be regarded as an extension of OOA/OOD. This paper discusses how effectively various nets have been used in actual software development support methods and tools, and evaluates several such methods and tools from the NOAD viewpoint.

  • An Algorithm for the K-Selection Problem Using Special-Purpose Sorters

    Heung-Shik KIM  Jong-Soo PARK  Myunghwan KIM  

     
    PAPER-Algorithm and Computational Complexity
    Vol: E75-D No:5  Page(s): 704-708

    An algorithm is presented for selecting the k-th smallest element of a totally ordered (but not sorted) set of n elements, 1 ≤ k ≤ n, in the case that a special-purpose sorter is used as a coprocessor. When the pipeline merge sorter is used as the special-purpose sorter, we analyze the comparison complexity of the algorithm for the given capacity of the sorter. The comparison complexity of the algorithm is 1.4167n + o(n), provided that the capacity of the sorter is 256 elements. The comparison complexity of the algorithm decreases as the capacity of the sorter increases.
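
    The flavour of delegating comparisons to a bounded-capacity sorter can be sketched as below. This is a generic illustration, not the paper's pipeline-merge-sorter algorithm or its 1.4167n + o(n) analysis: it simply keeps the k smallest candidates seen so far and refills the sorter block by block, assuming k is smaller than the sorter capacity.

```python
def select_kth(data, k, capacity=256, sort=sorted):
    """Return the k-th smallest element (1-indexed) of `data`.

    `sort` stands in for the special-purpose sorter and is only ever handed
    at most `capacity` elements.
    """
    if not 1 <= k <= len(data):
        raise ValueError("k out of range")
    if k >= capacity:
        raise ValueError("this simple sketch needs k < sorter capacity")
    block = capacity - k                 # room left beside the k candidates
    candidates = []                      # k smallest elements seen so far
    for start in range(0, len(data), block):
        batch = candidates + list(data[start:start + block])
        candidates = sort(batch)[:k]     # one pass through the "sorter"
    return candidates[k - 1]

print(select_kth(list(range(1000, 0, -1)), 10))   # -> 10
```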

  • Generalized Syndrome Polynomials for Decoding Reed-Solomon Codes

    Kiyomichi ARAKI  Ikuo FUJITA  

     
    LETTER-Information Theory and Coding Theory
    Vol: E75-A No:8  Page(s): 1026-1029

    In this letter, a generalized syndrome polynomial is proposed from which several decoding key-equations for Reed-Solomon codes can be derived systematically. These equations are always solved by the extended Euclidean algorithm.
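
    For reference, the Euclidean step the letter builds on can be sketched as follows: run the extended Euclidean algorithm on x^(2t) and a syndrome polynomial S(x) until the remainder degree drops below t, then read off the error-locator and error-evaluator polynomials. The sketch works over a prime field GF(P) for simplicity (practical Reed-Solomon codes use GF(2^m)) and does not involve the letter's generalized syndrome polynomial.

```python
P = 929                                   # any prime; real RS codes usually use GF(2^m)

def poly_divmod(a, b):
    """Quotient and remainder of polynomials a / b over GF(P), low-order first."""
    a = a[:]
    q = [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], P - 2, P)            # inverse of the leading coefficient of b
    for i in range(len(a) - len(b), -1, -1):
        q[i] = a[i + len(b) - 1] * inv % P
        for j, c in enumerate(b):
            a[i + j] = (a[i + j] - q[i] * c) % P
    while len(a) > 1 and a[-1] == 0:      # strip leading zeros of the remainder
        a.pop()
    return q, a

def key_equation(syndrome, t):
    """Sugiyama-style key-equation solver: returns (sigma, omega) up to a scalar."""
    r0 = [0] * (2 * t) + [1]              # x^{2t}
    r1 = syndrome[:]
    while len(r1) > 1 and r1[-1] == 0:
        r1.pop()
    u0, u1 = [0], [1]                     # Bezout coefficients of S(x)
    while len(r1) - 1 >= t and r1 != [0]:
        q, r = poly_divmod(r0, r1)
        u = u0 + [0] * max(0, len(q) + len(u1) - 1 - len(u0))
        for i, qi in enumerate(q):        # u = u0 - q * u1
            for j, uj in enumerate(u1):
                u[i + j] = (u[i + j] - qi * uj) % P
        r0, r1, u0, u1 = r1, r, u1, u
    return u1, r1                         # error locator sigma(x), evaluator omega(x)
```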

  • Polynomial Time Inference of Unions of Two Tree Pattern Languages

    Hiroki ARIMURA  Takeshi SHINOHARA  Setsuko OTSUKI  

     
    PAPER
    Vol: E75-D No:4  Page(s): 426-434

    In this paper, we consider the polynomial time inferability from positive data for unions of two tree pattern languages. A tree pattern is a structured pattern known as a term in logic programming, and a tree pattern language is the set of all ground instances of a tree pattern. We present a polynomial time algorithm to find a minimal union of two tree pattern languages containing given examples. Our algorithm can be considered as a natural extension of Plotkin's least generalization algorithm, which finds a minimal single tree pattern language. By using this algorithm, we can realize a consistent and conservative polynomial time inference machine that identifies unions of two tree pattern languages from positive data in the limit.
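
    Since the algorithm is described as an extension of Plotkin's least generalization, a minimal sketch of that base operation may help. Terms are written as tuples (functor, args...); constants are 0-ary tuples and variables are strings. This shows only the single-pattern least generalization, not the paper's extension to unions of two tree pattern languages.

```python
def lgg(s, t, table=None):
    """Plotkin's least general generalization of two tree patterns (terms).

    Disagreeing subterm pairs are replaced by fresh variables, and the same
    pair of subterms always receives the same variable.
    """
    if table is None:
        table = {}
    if s == t:
        return s
    same_functor = (isinstance(s, tuple) and isinstance(t, tuple)
                    and s[0] == t[0] and len(s) == len(t))
    if same_functor:
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    # disagreement: introduce a variable, reusing earlier choices
    if (s, t) not in table:
        table[(s, t)] = 'X%d' % len(table)
    return table[(s, t)]

# f(g(a), a) and f(g(b), b) generalize to f(g(X0), X0)
print(lgg(('f', ('g', ('a',)), ('a',)), ('f', ('g', ('b',)), ('b',))))
# -> ('f', ('g', 'X0'), 'X0')
```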

  • ACE: A Syntax-Directed Editor Customizable from Examples and Queries

    Yuji TAKADA  Yasubumi SAKAKIBARA  Takeshi OHTANI  

     
    PAPER
    Vol: E75-D No:4  Page(s): 487-498

    Syntax-directed editors have several advantages in editing programs because programming is guided by the syntax and free from syntax errors. Nevertheless, they are less popular than text editors. One of the reasons is that they force a priori specified editing structures on the user and do not allow the user to use his own structure. ACE (Algorithmically Customizable syntax-directed Editor) provides a solution to this problem by using a technique of machine learning; ACE has a special function that customizes the grammar algorithmically and interactively, based on a method for learning grammars from examples and queries. The grammar used in the editor is customized through interaction with the user so that the user can edit his program in a more familiar structure. The customizing function has been implemented based on methods for learning context-free grammars from structural examples, for which the correctness and the efficiency are proved formally. This guarantees the soundness and the efficiency of customization. Furthermore, ACE can be used as an algorithmic and interactive tool for designing grammars, which is required for several purposes such as compiler design and pretty-printer design.

  • Algorithmic Learning Theory with Elementary Formal Systems

    Setsuo ARIKAWA  Satoru MIYANO  Ayumi SHINOHARA  Takeshi SHINOHARA  Akihiro YAMAMOTO  

     
    INVITED PAPER
    Vol: E75-D No:4  Page(s): 405-414

    The elementary formal system (EFS, for short) is a kind of logic program that directly manipulates character strings. This paper briefly outlines the authors' studies on algorithmic learning theory developed in the framework of EFSs. We define two important classes of EFSs and a new hierarchy of various language classes. Then we discuss EFSs as logic programs. We show that EFSs form a good framework for inductive inference of languages by presenting a model inference system for EFSs in Shapiro's sense. Using the framework, we also show that inductive inference from positive data and PAC-learning are both much more powerful than they have been believed to be. We illustrate an application of our theoretical results to molecular biology.

  • Advanced Dimensioning Tool for Circuit-Switched Networks

    Masaaki SHINOHARA  

     
    PAPER
    Vol: E75-B No:7  Page(s): 594-600

    We have developed an advanced tool for dimensioning circuit-switched networks, called CNEP (Circuit-Switched Network Evaluation Program), for effective design of digital networks. CNEP features a high-reliability network structure (node dispersion, double homing, etc.), both-way circuit operation, and circuit modularity (or large module size), all of which are critical for digital networks. CNEP also solves other dimensioning problems, such as the cost difference between existing and newly installed circuits, and handles multi-hour traffic conditions, dynamic routing, and multiple-switching-unit nodes. Operations Research techniques are applied to produce exact and heuristic algorithms for these problems. Algorithms with good time-performance trade-off characteristics are chosen for CNEP.

  • Relationships between PAC-Learning Algorithms and Weak Occam Algorithms

    Eiji TAKIMOTO  Akira MARUOKA  

     
    PAPER
    Vol: E75-D No:4  Page(s): 442-448

    In the approximate learning model introduced by Valiant, Blumer et al. have shown that an Occam algorithm is immediately a PAC-learning algorithm. An Occam algorithm is a polynomial time algorithm that produces, for any sequence of examples, a simple hypothesis consistent with the examples; it can thus be thought of as a procedure that compresses the information in the examples. Weakening the compressing ability of Occam algorithms, a notion of weak Occam algorithms is introduced and the relationship between weak Occam algorithms and PAC-learning algorithms is investigated. It is shown that although a weak Occam algorithm is immediately a (probably) consistent PAC-learning algorithm, the converse does not hold. On the other hand, we show how to construct a weak Occam algorithm from a PAC-learning algorithm under some natural conditions. This result implies the equivalence between the existence of a weak Occam algorithm and that of a PAC-learning algorithm. Since the weak Occam algorithms constructed from PAC-learning algorithms are deterministic, our result improves a result of Board and Pitt stating that the existence of a PAC-learning algorithm is equivalent to that of a randomized Occam algorithm.
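
    For orientation, the standard Occam bound of Blumer et al. referred to above can be written as follows; the paper's weak variant relaxes this compression requirement (the exact relaxation is not reproduced here).

```latex
% Standard Occam algorithm (Blumer et al.); the weak variant relaxes the bound.
\[
  L \text{ is an } (\alpha,\beta)\text{-Occam algorithm for a class } \mathcal{C}
  \text{ if, for every sample } S \text{ of } m \text{ examples of } c \in \mathcal{C}
  \text{ over instances of size } n,
\]
\[
  L(S) \text{ is consistent with } S
  \quad\text{and}\quad
  \operatorname{size}\bigl(L(S)\bigr) \le \bigl(n \cdot \operatorname{size}(c)\bigr)^{\alpha}\, m^{\beta},
  \qquad \alpha \ge 1,\ 0 \le \beta < 1 .
\]
```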

  • Numerical Stability and Multirate Effect in Waveform Relaxation Algorithm with Under Relaxation Technique

    Koichi HAYASHI  Hideki ASAI  

     
    PAPER-Combinational/Numerical/Graphic Algorithms
    Vol: E75-A No:6  Page(s): 685-690

    This paper describes the waveform relaxation (WR) algorithm with the under-relaxation method based on the virtual state formulation (VSF) technique, and the effect of multirate behavior in this algorithm. First, we present the virtual state relaxation method using the VSF technique. Next, we introduce the VSF method into the WR algorithm in order to exploit the multirate behavior. Furthermore, we construct the relaxation-based circuit simulator DESIRE2 and apply this simulator to the transient analysis of MOS circuits. Finally, we show that the present technique enables efficient use of the multirate integration method in VSR and reduces the total simulation time without losing waveform accuracy.
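
    The basic waveform relaxation iteration with under-relaxation can be sketched on a toy two-state linear system as below. This is a generic Gauss-Jacobi sketch under stated assumptions (backward Euler on a fixed grid, a scalar under-relaxation factor lam), not the paper's VSF-based simulator DESIRE2, and it does not show the multirate step-size control.

```python
import numpy as np

def waveform_relaxation(a11, a12, a21, a22, x0, T=5.0, h=0.01, sweeps=30, lam=0.8):
    """Gauss-Jacobi waveform relaxation for x' = A x with under-relaxation.

    Each one-variable subsystem is integrated with backward Euler over the
    whole window while the other waveform is frozen at its previous iterate;
    the new waveform is blended with the old one by the factor `lam`.
    """
    n = int(T / h) + 1
    x1 = np.full(n, x0[0])        # initial waveform guesses: constants
    x2 = np.full(n, x0[1])
    for _ in range(sweeps):
        y1, y2 = x1.copy(), x2.copy()
        for k in range(1, n):     # backward Euler, other waveform frozen
            y1[k] = (y1[k - 1] + h * a12 * x2[k]) / (1 - h * a11)
            y2[k] = (y2[k - 1] + h * a21 * x1[k]) / (1 - h * a22)
        x1 = lam * y1 + (1 - lam) * x1        # under-relaxation step
        x2 = lam * y2 + (1 - lam) * x2
    return x1, x2

x1, x2 = waveform_relaxation(-2.0, 1.0, 1.0, -2.0, x0=(1.0, 0.0))
print(x1[-1], x2[-1])             # both waveforms decay toward 0
```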

  • Scheduling a Task Graph onto a Message Passing Multiprocessor System

    Tsuyoshi KAWAGUCHI  

     
    PAPER-Combinational/Numerical/Graphic Algorithms
    Vol: E75-A No:6  Page(s): 670-677

    In this paper we study the problem of scheduling parallel program modules onto an MPS (message passing multiprocessor system) so as to minimize the total execution time. Each node in the interconnection network of the MPS has buffers at its input ports to store messages waiting for the transmission. An algorithm for finding a route which minimizes the communication delay of a message to be sent between a processor-pair is first given. Next, we present heuristic algorithms for scheduling program modules onto the MPS. These algorithms use the above routing algorithm. The performances of the proposed algorithms are estimated by using simulation experiments.
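
    The overall shape of such a scheduler can be sketched as below: pick a ready module by priority, then place it on the processor that gives the earliest finish time once communication delays are added in. This is a generic earliest-finish-time list scheduler with assumed inputs (tasks, succ, cost, and a comm delay function); it is not the paper's buffer-aware routing algorithm or its specific heuristics.

```python
def list_schedule(tasks, succ, cost, n_procs, comm):
    """Greedy list scheduling of a task graph onto a message-passing system.

    tasks: {task: execution time}; succ: {task: [successors]};
    cost: {(u, v): message size}; comm(p, q, size) gives the communication
    delay between processors p and q (0 when p == q).
    """
    pred = {t: 0 for t in tasks}
    for u, vs in succ.items():
        for v in vs:
            pred[v] += 1
    ready = [t for t in tasks if pred[t] == 0]
    proc_free = [0.0] * n_procs            # when each processor becomes idle
    finish, where = {}, {}
    while ready:
        t = max(ready, key=lambda x: tasks[x])    # simple priority: longest task first
        ready.remove(t)
        best = None
        for p in range(n_procs):           # earliest-finish-time processor choice
            data_ready = max([0.0] + [finish[u] + comm(where[u], p, cost[(u, t)])
                                      for u in tasks if t in succ.get(u, [])])
            start = max(proc_free[p], data_ready)
            if best is None or start + tasks[t] < best[0]:
                best = (start + tasks[t], start, p)
        finish[t], where[t] = best[0], best[2]
        proc_free[best[2]] = best[0]
        for v in succ.get(t, []):          # release successors
            pred[v] -= 1
            if pred[v] == 0:
                ready.append(v)
    return max(finish.values()), where

# two modules in a chain on two processors; unit message cost across processors
tasks = {'a': 2.0, 'b': 3.0}
succ = {'a': ['b']}
cost = {('a', 'b'): 1.0}
comm = lambda p, q, size: 0.0 if p == q else size
print(list_schedule(tasks, succ, cost, 2, comm))   # -> (5.0, {'a': 0, 'b': 0})
```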

  • Multiterminal Filtering for Decentralized Detection Systems

    Te Sun HAN  Kingo KOBAYASHI  

     
    INVITED PAPER
    Vol: E75-B No:6  Page(s): 437-444

    The optimal coding strategy for signal detection in correlated Gaussian noise is established for a distributed sensor system with an essentially zero transmission-rate constraint. Specifically, we are able to obtain the same performance as in the situation of no restriction on the rate from each sensor terminal to the fusion center. This simple result contrasts with previous ad hoc studies containing many unnatural assumptions, such as the independence of the noises contaminating the received signal at each sensor. For the design of the optimal coder, we can use the classical Levinson-Wiggins-Robinson fast algorithm for block Toeplitz matrices to evaluate the weight vector needed for maximum-likelihood detection.

  • An Elastic-Block Matching Algorithm Using a Bilinear Space Warping

    Hansoo KIM  Jae-Kyoon KIM  

     
    PAPER-Digital Image Processing
    Vol: E75-A No:6  Page(s): 726-728

    A new elastic-block matching algorithm using bilinear space warping is proposed. In this scheme, a convex quadrilateral that minimizes a distortion measure against the current square block is searched for, to compensate for the shape deformation caused by a rigid body's 3-dimensional depth motion or rotation. The proposed algorithm gives a remarkable improvement in motion-compensated prediction compared with the conventional algorithm.
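
    The distortion measure for one candidate quadrilateral can be sketched as below: sample positions inside the quadrilateral are obtained by bilinearly interpolating its four corners, so a square block of the current frame is compared against a warped patch of the reference frame. This is an assumed illustration (nearest-neighbour sampling, SAD distortion); the paper's search over candidate quadrilaterals is not shown.

```python
import numpy as np

def bilinear_warp_sad(ref, cur_block, quad):
    """SAD between a square current-frame block and a bilinearly warped
    reference-frame patch.

    quad lists the four corner positions (y, x) in the reference frame, in
    the order top-left, top-right, bottom-right, bottom-left.
    """
    n = cur_block.shape[0]
    (y0, x0), (y1, x1), (y2, x2), (y3, x3) = quad
    sad = 0.0
    for i in range(n):
        for j in range(n):
            u, v = i / (n - 1), j / (n - 1)          # normalized block coords
            # bilinear (space-warping) interpolation of the corner positions
            y = (1-u)*(1-v)*y0 + (1-u)*v*y1 + u*v*y2 + u*(1-v)*y3
            x = (1-u)*(1-v)*x0 + (1-u)*v*x1 + u*v*x2 + u*(1-v)*x3
            yi, xi = int(round(y)), int(round(x))    # nearest-neighbour fetch
            sad += abs(float(ref[yi, xi]) - float(cur_block[i, j]))
    return sad

# identical frames and an undeformed quadrilateral give zero distortion
frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
block = frame[8:16, 8:16]
square = [(8, 8), (8, 15), (15, 15), (15, 8)]
print(bilinear_warp_sad(frame, block, square))   # -> 0.0
```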

  • An Adaptive Antenna System for High-Speed Digital Mobile Communications

    Yasutaka OGAWA  Yasuyuki NAGASHIMA  Kiyohiko ITOH  

     
    PAPER-Antennas and Propagation
    Vol: E75-B No:5  Page(s): 413-421

    High-speed digital land mobile communications suffer from frequency-selective fading due to long delay differences. Several techniques have been proposed to overcome the multipath propagation problem. Among them, an adaptive array antenna is suitable for very high-speed transmission because it can significantly suppress a multipath signal with a long delay difference. This paper describes the LMS adaptive array antenna for frequency-selective fading reduction and a new diversity technique. First, we propose a method to generate a reference signal in the LMS adaptive array. At the beginning of communication, we use training codes, known at the receiver, for the reference signal. After the training period, we use detected codes for the reference signal. We can generate the reference signal by modulating a carrier at the receiver with those codes. The carrier is generated independently of the incident signal, so the carrier frequency of the reference signal is in general different from that of the incident signal. However, the LMS adaptive array works in such a way that the carrier frequency of the array output coincides with that of the reference signal; that is, the frequency difference does not affect the performance of the LMS adaptive array. Computer simulations show the proper behavior of the LMS adaptive array with the above reference signal generator. Moreover, we present a new multipath diversity technique using the LMS adaptive array. The LMS adaptive array reduces frequency-selective fading by suppressing the multipath components, which means that the transmitted power is not used fully. We propose a multiple-beam antenna with the LMS adaptive array: each antenna pattern receives one of the multipath components, and we combine them after adjusting the timing, thereby realizing multipath diversity. In addition to the multipath fading reduction, the diversity technique improves the signal-to-noise ratio.
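
    The element-weight adaptation at the core of such an array is the standard complex LMS recursion, sketched below. The names and the toy usage are assumptions for illustration; the paper's reference-signal generator (training codes followed by decision-directed codes) and the multiple-beam diversity combining are not modelled.

```python
import numpy as np

def lms_adaptive_array(X, d, mu=0.01):
    """Complex LMS weight adaptation for an adaptive array antenna.

    X: (num_snapshots, num_elements) received element signals;
    d: (num_snapshots,) reference signal.
    Returns the final weight vector and the array output.
    """
    n_snap, n_elem = X.shape
    w = np.zeros(n_elem, dtype=complex)
    y = np.zeros(n_snap, dtype=complex)
    for k in range(n_snap):
        x = X[k]
        y[k] = np.vdot(w, x)              # array output w^H x
        e = d[k] - y[k]                   # error against the reference signal
        w = w + mu * np.conj(e) * x       # LMS update
    return w, y

# toy use: adapt toward a known desired carrier received on a 4-element array
rng = np.random.default_rng(0)
s = np.exp(1j * 2 * np.pi * 0.05 * np.arange(2000))      # desired signal
steer = np.exp(1j * np.pi * np.arange(4) * 0.3)          # its array response
X = np.outer(s, steer) + 0.1 * rng.standard_normal((2000, 4))
w, y = lms_adaptive_array(X, s, mu=0.005)                # y converges toward s
```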

  • Applying Adaptive Credit Assignment Algorithm for the Learning Classifier System Based upon the Genetic Algorithm

    Shozo TOKINAGA  Andrew B. WHINSTON  

     
    PAPER-Neural Systems
    Vol: E75-A No:5  Page(s): 568-577

    This paper deals with an adaptive credit assignment algorithm for selecting strategies with higher capabilities in a learning classifier system (LCS) based upon the genetic algorithm (GA). We emulate the kind of prizes and incentives employed in economies with imperfect information. The compensation scheme provides automatic adjustment in response to changes in the environment, and a convenient guideline for incorporating the constraints. The learning process in the GA-based LCS is realized by combining a pair of the most capable strategies (called classifiers), represented as production rules, to replace other less capable strategies, in a manner similar to the genetic operations on chromosomes in organisms. In the conventional learning classifier system, the capability s(k, t) (called strength) of a strategy k at time t is measured only by its suitability for sensing and recognizing the environment. Here, we also define and utilize the prizes and incentives obtained by employing the strategy, so that s(k, t) is increased if classifier k provides good rules and some amount is subtracted if classifier k violates the constraints. The new algorithm is applied to portfolio management. As the simulation results show, the net return of the portfolio management system surpasses the average return obtained in the American securities market. The result of the illustrative example is compared with the same system composed of neural networks, and related problems are discussed.

  • Analysis of Economics of Computer Backup Service

    Marshall FREIMER  Ushio SUMITA  Hsing K. CHENG  

     
    PAPER-Switching and Communication Processing
    Vol: E75-B No:5  Page(s): 385-400

    An organization may suffer large losses if its computer service is interrupted. For protection, it can purchase computer backup service from the outside market, which temporarily provides service replacement from a central facility. A dynamic probabilistic model is developed that describes such a computer backup service system. The parties involved have conflicting motivations: the supplier is interested in optimizing his expected profits subject to a given set of parameters, while the subscriber evaluates the service contract in his own best interest. This paper analyzes how the economic interests of the supplier and subscribers interact, based on a dynamic reliability analysis of their respective computer systems. Assuming all physical parameters are fixed, the supplier's optimal value in terms of the economic parameters is determined, and an algorithmic procedure is developed for computing such values. Some numerical examples are presented in order to gain insight into the system.

  • Tag-Partitioned Join

    Jeong Uk KIM  Jae Moon LEE  Myunghwan KIM  

     
    PAPER-Databases
    Vol: E75-D No:3  Page(s): 291-297

    A tag-partitioned join algorithm is described. The algorithm partitions only one relation, whereas other partition-based algorithms partition both relations. The joinable tuples of the one relation are rearranged, and some of them duplicated, according to the original sequence of the join attribute values of the other relation. To do this, the algorithm first finds the positions of all tuples of the other relation that are joinable with each tuple of the one relation, and then partitions the joinable tuples of the one relation into buckets using the positions found. The final join is performed on the partitioned relation and the other relation. We analyze and compare the performance of the algorithm with that of other partition-based join algorithms. The comparison shows that our method is better than the other partition-based methods for practical values of the analysis parameters.
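
    The control flow just described can be sketched as below: S is scanned once to record the positions of each join value, the joinable tuples of R are duplicated and tagged with those positions, and the tagged tuples are grouped into buckets that each touch only a contiguous slice of S. The relation names, the bucket count, and the dictionary-based index are assumptions for illustration; the paper's cost model is not reproduced.

```python
from collections import defaultdict

def tag_partitioned_join(R, S, r_key, s_key, n_buckets=4):
    """Join R and S on r_key/s_key, partitioning only R (S is never partitioned)."""
    # positions of each join value in S
    pos_of = defaultdict(list)
    for i, s in enumerate(S):
        pos_of[s[s_key]].append(i)

    # tag joinable R tuples with the S positions they must meet
    buckets = defaultdict(list)
    slice_len = (len(S) + n_buckets - 1) // n_buckets
    for r in R:
        for i in pos_of.get(r[r_key], []):
            buckets[i // slice_len].append((i, r))     # duplicate + tag

    # final join: each bucket only touches one contiguous slice of S
    result = []
    for b, tagged in sorted(buckets.items()):
        for i, r in sorted(tagged, key=lambda t: t[0]):
            result.append((r, S[i]))
    return result

R = [{'k': 1, 'a': 'r1'}, {'k': 2, 'a': 'r2'}]
S = [{'k': 2, 'b': 's1'}, {'k': 1, 'b': 's2'}, {'k': 2, 'b': 's3'}]
print(tag_partitioned_join(R, S, 'k', 'k', n_buckets=2))
# -> three joined pairs, in the original order of S's join attribute values
```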
