
Keyword Search Result

[Keyword] tin (3578 hits)

Results 3401-3420 of 3578

  • The Emptiness Problem for Lexical-Functional Grammars is Undecidable

    Tetsuro NISHINO  

     
    LETTER-Automata, Languages and Theory of Computing

    Vol: E77-D No:5  Page(s): 597-600

    J. Bresnan and R. M. Kaplan introduced lexical-functional grammars (LFGs, for short) as a new formalism for human language syntax. It is important to establish formal properties of grammars of this kind in order to characterize the formal complexity of human languages. In this paper, we show that the emptiness problem for LFGs is undecidable.

  • Designing Efficient Geometric Search Algorithms Using Persistent Binary-Binary Search Trees

    Xuehou TAN  Tomio HIRATA  Yasuyoshi INAGAKI  

     
    PAPER

    Vol: E77-A No:4  Page(s): 601-607

    Persistent data structures, introduced by Sarnak and Tarjan, have been found especially useful in designing geometric algorithms. In this paper, we present a persistent form of the binary-binary search tree, and then apply this data structure to various geometric searching problems, such as three-dimensional ray shooting, hidden surface removal, and polygonal point enclosure searching. In all applications, we either improve existing bounds or establish new ones.
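    The structure above rests on persistence: an update produces a new version while every earlier version remains queryable. The sketch below is only a minimal illustration of that idea, using path copying on a plain binary search tree in the spirit of Sarnak and Tarjan; it is not the paper's two-level binary-binary search tree, and all names are illustrative.

    ```python
    # Minimal sketch of partial persistence via path copying in a plain BST.
    class Node:
        __slots__ = ("key", "left", "right")
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def insert(root, key):
        """Return the root of a NEW version; the old version stays valid."""
        if root is None:
            return Node(key)
        if key < root.key:
            return Node(root.key, insert(root.left, key), root.right)  # copy node on path
        return Node(root.key, root.left, insert(root.right, key))      # copy node on path

    def member(root, key):
        while root is not None:
            if key == root.key:
                return True
            root = root.left if key < root.key else root.right
        return False

    # Each insert returns a new root; queries against old roots still work.
    v1 = insert(None, 5)
    v2 = insert(v1, 3)
    assert member(v2, 3) and not member(v1, 3)
    ```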

  • Stochastic Relaxation for Continuous Values--Standard Regularization Based on Gaussian MRF--

    Sadayuki HONGO  Isamu YOROIZAWA  

     
    PAPER-Regularization

    Vol: E77-D No:4  Page(s): 425-432

    We propose a fast computation method of stochastic relaxation for the continuous-valued Markov random field (MRF) whose energy function is represented in the quadratic form. In the case of regularization in visual information processing, the probability density function of a state transition can be transformed into a Gaussian function; therefore, the probabilistic state transition is realized with Gaussian random numbers whose mean value and variance are calculated from the input data and the neighborhood. Early visual information processing can be represented with a coupled MRF model which consists of continuity and discontinuity processes. The continuity process represents a visual property such as an intensity pattern, and the discontinuity process represents discontinuities of the continuity process. Since most energy functions for early visual information processing can be represented in the quadratic form in the continuity process, the probability density of the local computation variables in the continuity process is equivalent to a Gaussian function. Using this characteristic, the discrimination function computation does not need to sum the probabilities of all possible states; therefore, the computation load for a state transition is drastically decreased. Furthermore, if a continuous-valued discontinuity process is introduced, the MRF model can directly represent the strength of a discontinuity. Moreover, the discrimination function of this energy function in the discontinuity process, which is linear, can also be calculated without probability summation. In this paper, a fast method for calculating the state transition probability of the continuous-valued MRF for visual information processing is explained theoretically. Next, dependence on the initial conditions, computation time, and dependence on the statistical estimation of the condition are investigated in comparison with conventional methods, using the restoration of a corrupted square wave and of a corrupted one-dimensional slice of a natural image as examples.
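    To see concretely why no sum over states is needed, the sketch below runs Gibbs-style stochastic relaxation on a one-dimensional quadratic (Gaussian) MRF: each site's transition density is Gaussian, with mean and variance obtained in closed form from the data term and the neighbors. The energy form, parameter names, and values are illustrative assumptions, not the paper's exact model or temperature schedule.

    ```python
    import numpy as np

    def gibbs_gaussian_mrf(d, lam=1.0, T=1.0, sweeps=50, rng=None):
        """Sketch of Gibbs sampling for the quadratic energy
           E(x) = lam * sum_i (x_i - d_i)^2 + sum_i (x_i - x_{i+1})^2  (1-D chain).
           Because E is quadratic in each x_i, the local transition density is
           Gaussian, so no sum over discretised states is ever needed."""
        rng = np.random.default_rng(rng)
        x = d.astype(float).copy()
        n = len(x)
        for _ in range(sweeps):
            for i in range(n):
                nbrs = [x[i - 1]] if i > 0 else []
                if i < n - 1:
                    nbrs.append(x[i + 1])
                a = lam + len(nbrs)                  # curvature of the local energy
                mean = (lam * d[i] + sum(nbrs)) / a  # minimiser of the local energy
                var = T / (2.0 * a)                  # Gaussian transition variance
                x[i] = rng.normal(mean, np.sqrt(var))
        return x

    # Example: restore a corrupted square wave, as in the paper's experiments.
    clean = np.repeat([0.0, 1.0, 0.0], 30)
    noisy = clean + np.random.default_rng(0).normal(0, 0.3, clean.size)
    restored = gibbs_gaussian_mrf(noisy, lam=0.5, T=0.05)
    ```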

  • On a Unified Synthesizing Approach for Cellular Neural Networks

    Chun-ying HO  Shinsaku MORI  

     
    PAPER-Network Synthesis

    Vol: E77-D No:4  Page(s): 433-442

    In this paper, we develop a unified synthesizing approach for the cloning templates of Cellular Neural Networks (CNNs). In particular, we consider the case in which the signal processing problem is complex and a multilayered CNN with time-variant templates is necessary. The method is based on the correspondence between the cloning templates of the CNN and those of its discrete counterpart, the Discrete-Time Cellular Neural Network (DTCNN), in solving a prescribed image processing problem when time-variant templates are involved. Thus, one can start by calculating the cloning templates for the DTCNN and then translate them into templates for CNN operations. As a result, the mathematical tools used in the synthesis of DTCNNs can also be applied to analog CNNs, which simplifies the design of CNNs for signal processing. Examples such as contour drawing and parallel thinning are shown to illustrate the merits of the proposed method.
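    For reference, the sketch below writes out the standard cell dynamics of the two network types whose templates are being related: one Euler step of the Chua-Yang CNN state equation and one iteration of the DTCNN update. These are the textbook cell equations only; the paper's actual contribution, the translation rule between the two sets of templates, is not reproduced here, and the time step and template shapes are assumptions.

    ```python
    import numpy as np
    from scipy.signal import correlate2d

    def sat(x):
        """Standard piecewise-linear CNN output function."""
        return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

    def cnn_step(x, u, A, B, I, dt=0.1):
        """One Euler step of the Chua-Yang CNN state equation
           dx/dt = -x + A*y + B*u + I  (template sums written as correlations)."""
        y = sat(x)
        dx = -x + correlate2d(y, A, mode="same") + correlate2d(u, B, mode="same") + I
        return x + dt * dx

    def dtcnn_step(x, u, A, B, I):
        """One iteration of the discrete-time counterpart (DTCNN):
           x(k+1) = A*y(k) + B*u + I with binary output y = sgn(x)."""
        y = np.sign(x)
        return correlate2d(y, A, mode="same") + correlate2d(u, B, mode="same") + I
    ```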

  • Efficient Dynamic Fault Imaging by Fully Utilizing CAD Data in CAD-Linked Electron Beam Test System

    Koji NAKAMAE  Hirohisa TANAKA  Hideharu KUBOTA  Hiromu FUJITA  

     
    PAPER

    Vol: E77-C No:4  Page(s): 546-551

    A method to improve the efficiency of dynamic fault imaging (DFI) by fully utilizing the CAD data in the CAD-linked electron beam test system is proposed. In the method, in order to shorten the long acquisition time of the stroboscopic voltage contrast images over the whole area of the chip during the entire test cycle, only the area and phase (time) required for fault tracing are selected by utilizing the CAD data. Furthermore, image processing techniques are combined with the method to improve the efficiency of the DFI. In particular, the signal averaging technique is used to improve the signal-to-noise ratio in the stroboscopic images: all voltage information data on an equipotential electrode recognized from the CAD layout data are averaged. This enables us to reduce the acquisition time of images. Moreover, the experimental system is set up so that the image processing can be performed in parallel with the acquisition of the stroboscopic images. The proposed method is applied to part of a 2k-transistor block of a nonpassivated CMOS LSI, where a marginal fault is detected. The result shows that the method is an efficient approach to fully automatic fault diagnosis in the CAD-linked electron beam test system, and could improve the efficiency of conventional DFI by a factor of more than 1000.
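    The per-net averaging step can be pictured with a short sketch: given a voltage-contrast image and a CAD-derived label map that assigns each pixel to an interconnection (net), every pixel of a net is replaced by the net's mean value, which is what raises the signal-to-noise ratio. The array layout and the use of -1 for background are assumptions for illustration.

    ```python
    import numpy as np

    def average_over_nets(image, net_labels):
        """Replace every pixel of an equipotential net (as identified from the
           CAD layout) by the mean voltage-contrast value of that net."""
        out = image.astype(float).copy()
        for net in np.unique(net_labels):
            if net < 0:                      # -1 marks background / no electrode
                continue
            mask = net_labels == net
            out[mask] = image[mask].mean()   # averaging raises the SNR per net
        return out
    ```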

  • Hierarchical Properties of Realtime One-Way Alternating Multi-Stack-Counter Automata

    Tsunehiro YOSHINAGA  Katsushi INOUE  Itsuo TAKANAMI  

     
    PAPER

    Vol: E77-A No:4  Page(s): 621-629

    This paper investigates the accepting powers of one-way alternating multi-stack-counter automata (1amsca's) and one-way alternating multi-counter automata (1amca's) which operate in realtime. For each k ≥ 1, let 1ASCA(k, real) (1ACA(k, real)) denote the class of sets accepted by realtime one-way alternating k-stack-counter (k-counter) automata, and let 1USCA(k, real) (1UCA(k, real)) denote the class of sets accepted by realtime one-way alternating k-stack-counter (k-counter) automata with only universal states. We first investigate a relationship between the accepting powers of realtime 1amsca's (1amca's) with only universal states, with only existential states, and with full alternation. We then investigate hierarchical properties based on the numbers of counters and stack-counters, and show, for example, that for each k ≥ 1, 1USCA(k+1, real) - 1ASCA(k, real) ≠ φ and 1UCA(k+1, real) - 1ACA(k, real) ≠ φ. We finally investigate a relationship between the accepting powers of realtime 1amsca's and 1amca's, and show, for example, that there are no i and j such that 1UCA(i, real) = 1USCA(j, real), and that 1USCA(k, real) - 1ACA(k, real) ≠ φ for each k ≥ 1.

  • An Analysis of the Economics of the VLSI Development Including Test Cost

    Koji NAKAMAE  Homare SAKAMOTO  Hiromu FUJIOKA  

     
    PAPER-Computer Aided Design (CAD)

    Vol: E77-A No:4  Page(s): 698-705

    In order to evaluate the effect of testing technologies such as electron beam (EB) testing and focused ion beam (FIB) reconstruction on the VLSI development cycle, the VLSI development period and cost are analyzed by using detailed fault models which make it possible to take into account the effect of EB and FIB techniques. First, the specifications of fabricated VLSIs and the VLSI development cycle are modeled. Next, the faults which can be diagnosed by such testing techniques are modeled. By using the parametric model of the VLSI development cycle, the development period and cost are analyzed. In the fault diagnosis stage, the use of an EB tester, or the combined use of an EB tester and FIB equipment, instead of a traditional mechanical prober is considered. The development period and cost are reduced by a factor of about 3 by using EB and FIB diagnosis equipment. The effect of the scan path method is also evaluated with the same simulation method. Results show that scan path design is effective in reducing both the period and the cost of the development cycle.

  • A Hardware Implementation of a Neural Network Using the Parallel Propagated Targets Algorithm

    Anthony V. W. SMITH  Hiroshi SAKO  

     
    PAPER-Hardware

    Vol: E77-D No:4  Page(s): 516-527

    This document describes a proposal for the implementation of a new VLSI neural network technique called Parallel Propagated Targets (PPT). This technique differs from existing techniques because all layers within a given network can learn simultaneously, not sequentially as with the Back Propagation algorithm. The Parallel Propagated Targets algorithm uses only information local to each layer, and therefore there is no backward flow of information within the network. This allows a simplification in the system design and a reduction in the complexity of implementation, as well as achieving greater efficiency in terms of computation. Since all synapses can be calculated simultaneously, the PPT algorithm makes it possible, for the first time, to compute all layers of a multi-layered network in parallel.

  • Matching of DUT Interconnection Pattern with CAD Layout in CAD-Linked Electron Beam Test System

    Koji NAKAMAE  Ryo NAKAGAKI  Katsuyoshi MIURA  Hiromu FUJIOKA  

     
    PAPER

    Vol: E77-C No:4  Page(s): 567-573

    Precise matching of the SEM (scanning electron microscope) image of the DUT (device under test) interconnection pattern with the CAD layout is required in the CAD-linked electron beam test system. We propose a point pattern matching method that utilizes a corner pattern in the CAD layout. In the method, a corner pattern consisting of a small number of pixels is derived by taking into account the design rules of VLSIs. Using the corner pattern as a template, the matching points of the template are sought in both the SEM image and the CAD layout. Then, the point image obtained from the SEM image of the DUT is matched with that from the CAD layout. Even if the number of points obtained from the DUT pattern differs from that in the CAD layout because of noise in the SEM image, the point matching method still succeeds. The method is applied to nonpassivated and passivated LSIs. Even for the passivated LSI, where the contrast in the SEM image is mainly determined by voltage contrast, matching is successful. The computing time of the proposed method is shortened by a factor of 4 to 10 compared with a conventional correlation-coefficient method.
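    One simple way to picture the final point-to-point step is translation voting: every displacement between a corner point found in the SEM image and a corner point in the CAD layout casts a vote, and the most popular (quantized) displacement wins, so extra or missing points contribute only minority votes. This is a simplified, translation-only sketch, not the paper's matching procedure; the quantization step is an assumption.

    ```python
    import numpy as np
    from collections import Counter

    def match_by_translation(sem_pts, cad_pts, quant=2):
        """Estimate the SEM-to-CAD offset by voting over all pairwise
           displacement vectors, quantised to `quant` pixels.  Spurious or
           missing corner points only add minority votes."""
        votes = Counter()
        for p in sem_pts:
            for q in cad_pts:
                d = tuple(np.round((np.asarray(q) - np.asarray(p)) / quant).astype(int))
                votes[d] += 1
        (dx, dy), _ = votes.most_common(1)[0]
        return dx * quant, dy * quant
    ```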

  • A Regularization Method for Neural Network Learning that Minimizes Estimation Error

    Miki YAMADA  

     
    PAPER-Regularization

    Vol: E77-D No:4  Page(s): 418-424

    A new regularization cost function for generalization in real-valued function learning is proposed. This cost function is derived from the maximum likelihood method using a modified sample distribution, and consists of a sum of square errors and a stabilizer which is a function of integrated square derivatives. Each of the regularization parameters which gives the minimum estimation error can be obtained uniquely and non-empirically. The parameters are not constants and change in value during learning. Numerical simulation shows that this cost function predicts the true error accurately and is effective in neural network learning.
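    Schematically, a cost of the kind described above combines a squared-error term with a stabilizer built from integrated square derivatives, each weighted by its own regularization parameter. The notation below is illustrative only; the paper's exact stabilizer and the rule that fixes each parameter non-empirically are not reproduced.

    ```latex
    % Schematic form of the regularization cost: squared errors plus a stabilizer
    % of integrated square derivatives; the \lambda_k are the regularization
    % parameters chosen to minimize the estimation error.
    E(f) \;=\; \sum_{i=1}^{N} \bigl(y_i - f(x_i)\bigr)^2
          \;+\; \sum_{k} \lambda_k \int \Bigl(\frac{d^{k} f(x)}{dx^{k}}\Bigr)^{2}\,dx
    ```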

  • E-Beam Static Fault Imaging with a CAD Interface and Its Application to Marginal Fault Diagnosis

    Norio KUJI  Kiyoshi MATSUMOTO  

     
    PAPER

    Vol: E77-C No:4  Page(s): 552-559

    A new image-based diagnostic method is proposed for use with an E-beam tester. The method features a static fault imaging technique and a navigation map for fault tracing. Static fault imaging with a dc E-beam enables the fast acquisition of images without any additional hardware. Then, guided by the navigation map derived from CAD data, marginal timing faults can be easily pinpointed. A statistical estimation of the average count of static fault images for various LSI circuits shows that the proposed method can diagnose marginal faults by observing fewer than thirty faulty images, and that a faulty area can be localized with up to five times fewer observations than with the guided-probe method. The proposed method was applied to a 19k-gate CMOS logic LSI circuit, and a marginal timing fault was successfully located.

  • LSI Failure Analysis with CAD-Linked Electron Beam Test System and Its Cost Evaluation

    Hiromu FUJIOKA  Koji NAKAMAE  

     
    INVITED PAPER

    Vol: E77-C No:4  Page(s): 535-545

    Following a discussion of various testing methods used in electron beam (EB) test systems, new waveform-based and image-based approaches for the CAD-linked EB test system are proposed. A waveform-based automatic tracing algorithm for transistor-level performance faults is first discussed. Then, a method to improve the efficiency of an image-based method called dynamic fault imaging (DFI) by fully utilizing the CAD data is described. Third, the VLSI development cost is analyzed by using fault models that make it possible to take into account the effect of new testing technologies such as EB testing and focused ion beam (FIB) microfabrication. Finally, future prospects are discussed.

  • A Stochastic Parallel Algorithm for Supervised Learning in Neural Networks

    Abhijit S. PANDYA  Kutalapatata P. VENUGOPAL  

     
    PAPER-Learning

    Vol: E77-D No:4  Page(s): 376-384

    The Alopex algorithm is presented as a universal learning algorithm for neural networks. Alopex is a stochastic parallel process which has previously been applied in the theory of perception. It has also been applied to several nonlinear optimization problems, such as the Travelling Salesman Problem. It estimates the weight changes by using only a scalar cost function, which is a measure of global performance. In this paper we describe the use of the Alopex algorithm for solving nonlinear learning tasks with multilayer feed-forward networks. Alopex has several advantages, such as the ability to escape from local minima, rapid algorithmic computation based on a scalar cost function, and synchronous updating of weights. We present the results of computer simulations for several tasks, such as learning of parity, encoder problems, and the MONK's problems. The learning performance as well as the generalization capacity of the Alopex algorithm are compared with those of the backpropagation procedure, and it is shown that Alopex has specific advantages over backpropagation. An important advantage of the Alopex algorithm is its ability to extract information from noisy data. We investigate the efficacy of the algorithm for faster convergence by considering different error functions, and show that an information-theoretic error measure gives better convergence characteristics. The algorithm has also been applied to more complex practical problems, such as undersea target recognition from sonar returns and adaptive control of dynamical systems, and the results are discussed.
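    The core of an Alopex-style update is easy to state: every weight takes a small step of fixed size, and the step direction is biased by the correlation between the previous weight change and the previous change in the scalar cost. The sketch below shows that logic with a constant temperature T; the actual Alopex annealing schedule, error measures, and network details from the paper are not reproduced, and all parameter values are illustrative.

    ```python
    import numpy as np

    def alopex_train(cost, w, delta=0.01, T=0.1, iters=2000, rng=None):
        """Alopex-style stochastic parallel update driven only by a scalar cost."""
        rng = np.random.default_rng(rng)
        dw = rng.choice([-delta, delta], size=w.shape)   # previous step (random at start)
        e_prev = cost(w)
        de = 0.0                                         # previous change in the cost
        for _ in range(iters):
            corr = dw * de                               # negative product: last move helped
            p_neg = 1.0 / (1.0 + np.exp(-corr / T))      # probability of stepping -delta
            dw = np.where(rng.random(w.shape) < p_neg, -delta, delta)
            w = w + dw                                   # all weights move simultaneously
            e = cost(w)
            de, e_prev = e - e_prev, e
        return w

    # Usage on a toy quadratic cost (a single scalar global measure, as Alopex requires):
    w_opt = alopex_train(lambda w: float(np.sum(w ** 2)), np.array([2.0, -3.0]), rng=0)
    ```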

  • Cone/Block Methods for Logic Simulation Time Reduction in E-Beam Guided-Probe Diagnosis

    Norio KUJI  Kazuhiro SHIRAKAWA  

     
    PAPER

    Vol: E77-C No:4  Page(s): 560-566

    Cone and Block methods that sharply reduce logic simulation time in E-beam guided-probe diagnosis are proposed. These methods are based on a primitive-cell-level tracing algorithm, which traces faulty-state cells one by one at the primitive-cell level. By executing logic simulations so that simulated responses are reported only for the small set of nodes in a tracing path and its immediate vicinity, simulation CPU time is sharply reduced with state-of-the-art logic simulators such as Verilog-XL. With the proposed methods, the total CPU time in a diagnostic process can be reduced to 1/700 that of a conventional method. Additionally, the total amount of simulation data is reduced to 1/40 of its original amount. These methods were applied to the guided-probe diagnosis of actual 110k-gate ASIC chips, and it was verified that they could be diagnosed in under seven hours per device, which is practical. This technology will greatly contribute to shortening the turnaround time of ASIC development.

  • A Neurocomputational Approach to the Correspondence Problem in Computer Vision

    Hiroshi SAKO  Hadar Itzhak AVI-ITZHAK  

     
    PAPER-Image Processing

    Vol: E77-D No:4  Page(s): 507-515

    A problem which often arises in computer vision is that of matching corresponding points of images. In the case of object recognition, for example, the computer compares new images to templates from a library of known objects. A common way to perform this comparison is to extract feature points from the images and compare these points with the template points. Another common example is motion detection, where feature points of a video image are compared to those of the previous frame. In both of these examples, the point correspondence is complicated by the fact that the point sets are not only randomly ordered but have also been distorted by an unknown transformation, and therefore have quite different coordinates. In the case of object recognition, there exists a transformation from the object being viewed to its projection onto the camera's imaging plane, while in the motion detection case, this transformation represents the motion (translation and rotation) of the object. If the parameters of the transformation are completely unknown, then all n! permutations must be compared (n: number of feature points). For each permutation, the ensuing transformation is computed using the least-squares projection method. The exponentially large computation required for this is prohibitive. A neural computational method is proposed to solve these combinatorial problems. This method obtains the best correspondence matching and also finds the associated transform parameters. The method was applied to two-dimensional point correspondence and three-to-two-dimensional correspondence. Finally, this connectionist approach extends readily to a Boltzmann machine implementation. This implementation is desirable when the transformation is unknown, as it is less sensitive to local minima regardless of initial conditions.
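    To make the per-permutation cost concrete, the sketch below shows the least-squares fit that must be solved once a correspondence is assumed: for matched 2-D point sets it recovers the rotation and translation with the standard SVD (Procrustes) solution, and the residual measures how good that permutation is. Exhaustive search would repeat this for up to n! permutations, which is the combinatorial blow-up the neural approach avoids. This is a standard construction offered as background, not the paper's own projection method.

    ```python
    import numpy as np

    def fit_rigid_2d(P, Q):
        """For ONE assumed correspondence (row i of P matches row i of Q), find
           the rotation R and translation t minimising sum ||R p_i + t - q_i||^2."""
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = Q.mean(0) - R @ P.mean(0)
        residual = np.linalg.norm((P @ R.T + t) - Q)    # quality of this permutation
        return R, t, residual

    # Example with a known rotation/translation applied to three points:
    P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    Q = P @ R_true.T + np.array([5.0, -2.0])
    R, t, res = fit_rigid_2d(P, Q)   # res is ~0 for the correct correspondence
    ```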

  • An Optimal Time for Software Testing under the User's Requirement of Failure-Free Demonstration before Release

    Byung Chul CHO  Kyung Soo PARK  

     
    PAPER-Reliability, Availability and Vulnerability

    Vol: E77-A No:3  Page(s): 563-570

    A new approach to the problem of optimal software testing time is described. Most models implicitly assume that testing is terminated at the end of a prescribed period of time without the user's approval. This means that the release time and the in-service reliability are determined unilaterally by the developer. If the software developer itself uses and maintains the software, the assumption is appropriate. However, it may be inappropriate if software requiring more stringent reliability is developed by a second party on a contract basis. In this case, the time of release is usually determined with the user's approval. To overcome the weaknesses of this assumption, a two-stage testing procedure with a failure-free release policy is proposed. The software, after being tested by the developer for some time (in-house testing), is transferred to acceptance testing performed jointly with the user. During acceptance testing, it is released when an interval of τ units of time, specified by the user, is observed to be failure-free for the first time. The policy may be attractive to a user, who can determine the time of release and extend the testing time by increasing τ. A software cost model for the policy is developed. For the software developer, an optimal in-house testing time minimizing software cost, and various quantities of interest, such as the expected duration of acceptance testing, are derived based on the Jelinski-Moranda software reliability model. Finally, numerical examples are shown to illustrate the results.
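    The release rule itself is easy to simulate. The sketch below draws failure times from the Jelinski-Moranda model (hazard proportional to the number of remaining faults), switches to acceptance testing at a fixed in-house testing time, and releases at the end of the first failure-free interval of length tau. It is a Monte-Carlo illustration of the policy only, with made-up parameter values; it is not the paper's analytical cost model.

    ```python
    import numpy as np

    def simulate_release(N=30, phi=0.02, t_inhouse=100.0, tau=20.0, rng=None):
        """Two-stage policy under Jelinski-Moranda: N residual faults, hazard
           phi*(remaining faults); release at the end of the first tau-long
           failure-free interval observed during acceptance testing."""
        rng = np.random.default_rng(rng)
        t, remaining = 0.0, N
        gap_start = t_inhouse                       # failure-free gap measured from here
        while remaining > 0:
            t += rng.exponential(1.0 / (phi * remaining))
            if t >= t_inhouse:                      # failure falls in acceptance testing
                if t - gap_start >= tau:
                    break                           # tau quiet units elapsed first: released
                gap_start = t                       # otherwise the failure resets the gap
            remaining -= 1
        return gap_start + tau                      # release instant

    # Estimated mean length of the acceptance-testing stage:
    runs = [simulate_release(rng=i) - 100.0 for i in range(2000)]
    print(np.mean(runs))
    ```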

  • Genetic Channel Router

    Xingzhao LIU  Akio SAKAMOTO  Takashi SHIMAMOTO  

     
    PAPER-Computer Aided Design (CAD)

    Vol: E77-A No:3  Page(s): 492-501

    Genetic algorithms have been shown to be very useful in a variety of search and optimization problems. In this paper, we describe the implementation of genetic algorithms for channel routing problems and identify the key points that are essential to making full use of the population of potential solutions, which is one of the characteristics of genetic algorithms. Three efficient crossover techniques, which can be further divided into 13 kinds of crossover operators, are compared. We also extend our previous work with the ability to handle the dogleg case by simply splitting multi-terminal nets into a series of 2-terminal subnets. The router routes Deutsch's difficult example with 21 tracks without any detours.
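    As a rough illustration of how a population-based router can be organized, the sketch below encodes a candidate solution as a track assignment for 2-terminal nets and penalizes horizontally overlapping nets that share a track. It uses a single one-point crossover and random mutation; the paper's three crossover techniques, the 13 operators, vertical constraints, and dogleg handling are deliberately not reproduced, and all parameters are assumptions.

    ```python
    import random

    def channel_ga(nets, tracks, pop=40, gens=200, pmut=0.1, seed=0):
        """Toy GA for channel routing: a chromosome assigns each 2-terminal net
           (a horizontal interval) to a track; fitness penalises overlaps."""
        rng = random.Random(seed)

        def overlaps(a, b):
            return not (a[1] < b[0] or b[1] < a[0])

        def cost(chrom):
            clashes = sum(1 for i in range(len(nets)) for j in range(i + 1, len(nets))
                          if chrom[i] == chrom[j] and overlaps(nets[i], nets[j]))
            return 1000 * clashes + len(set(chrom))     # overlaps dominate track count

        popn = [[rng.randrange(tracks) for _ in nets] for _ in range(pop)]
        for _ in range(gens):
            popn.sort(key=cost)
            keep = popn[: pop // 2]                     # simple truncation selection
            children = []
            while len(keep) + len(children) < pop:
                a, b = rng.sample(keep, 2)
                cut = rng.randrange(1, len(nets))
                child = a[:cut] + b[cut:]               # one-point crossover
                if rng.random() < pmut:
                    child[rng.randrange(len(nets))] = rng.randrange(tracks)
                children.append(child)
            popn = keep + children
        best = min(popn, key=cost)
        return best, cost(best)

    # Toy usage: nets as (leftmost column, rightmost column) intervals.
    best, c = channel_ga([(0, 3), (2, 5), (4, 8), (6, 9)], tracks=3)
    ```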

  • Automatic Tracing of Transistor-Level Performance Faults with CAD-Linked Electron Beam Test System

    Katsuyoshi MIURA  Koji NAKAMAE  Hiromu FUJIOKA  

     
    PAPER-Computer Aided Design (CAD)

    Vol: E77-A No:3  Page(s): 539-545

    An automatic tracing algorithm for transistor-level performance faults in the waveform-based approach with a CAD-linked electron beam test system, which utilizes transistor-level circuit data in the CAD database, is proposed. A performance fault means that some performance measure of a VLSI, such as a temporal parameter (rise time, fall time, and so on), lies outside the specified range. Automatic fault tracing at the transistor level is modeled using graphs. Combinational circuits consisting of MOS transistors are considered, and a single fault is assumed to be present in the circuit. The algorithm uses depth-first search, in which faulty upstream interconnections are searched as deeply as possible. The treatment of faults on downstream interconnections and on unmeasurable interconnections is also given. Application of the algorithm to a 2k-transistor block of a CMOS circuit in simulation showed its validity.
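    The search strategy can be pictured with a generic sketch: starting from a node whose waveform is faulty, keep walking upstream to any driver that also measures faulty, as deeply as possible; a faulty node whose drivers all measure good is reported as the fault site. The graph representation, the `is_faulty` measurement stub, and the single-fault assumption below are illustrative simplifications, not the paper's exact algorithm or its handling of unmeasurable interconnections.

    ```python
    def trace_fault(drivers, is_faulty, start):
        """Depth-first upstream trace from a faulty node.
           `drivers` maps a node to the nodes driving it; `is_faulty(node)`
           stands in for an E-beam waveform measurement at that node."""
        visited = set()
        stack = [start]
        while stack:
            node = stack.pop()
            if node in visited:
                continue
            visited.add(node)
            faulty_inputs = [d for d in drivers.get(node, []) if is_faulty(d)]
            if not faulty_inputs:
                return node                  # deepest faulty node: candidate fault site
            stack.extend(faulty_inputs)      # keep going upstream
        return start                         # fallback (not reached for combinational circuits)

    # Toy netlist: OUT is driven by N1 and N2; N1 is driven by IN1 and IN2.
    drivers = {"OUT": ["N1", "N2"], "N1": ["IN1", "IN2"]}
    faulty_nodes = {"OUT", "N1"}
    print(trace_fault(drivers, lambda n: n in faulty_nodes, "OUT"))   # -> "N1"
    ```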

  • Datagram Delivery in an ATM-Internet

    Hiroshi ESAKI  Yoshiyuki TSUDA  Takeshi SAITO  Shigeyasu NATSUBORI  

     
    PAPER

    Vol: E77-B No:3  Page(s): 314-326

    This paper proposes a datagram delivery (class D service) architecture for an ATM-Internet, which is a network interconnecting ATM-LANs through Inter-Working Units (IWUs). A fast datagram delivery system can be provided through the following techniques. Datagram delivery to the destination terminal is performed by a datagram delivery server, the so-called CLS, located in the ATM-LAN to which the destination terminal belongs. Each CLS manages only the addresses of the terminals belonging to its own ATM-LAN. The cells belonging to a given datagram are transferred through a single (seamless) ATM connection from the source terminal to the CLS in the ATM-LAN to which the destination terminal belongs. The source terminal resolves only the access-point address of the ATM-LAN to which the destination terminal belongs when it submits cells to the network to transfer a datagram to that destination terminal. The proposed architecture can easily be applied to ATM-LAN systems based on a VPI routing architecture. The number of ATM connections required to provide datagram delivery under the proposed architecture is less than 1.0% of the ATM connections that the ATM-Internet can provide. Also, the address space required at the UNI to provide datagram delivery is less than 1.0% of the UNI address space available for use as ATM connection identifiers.
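    The source-side behaviour can be summarized in a few lines: resolve only the access point (CLS) of the destination's ATM-LAN, then push every cell of the datagram down that single connection. The sketch below is a hypothetical illustration; the address prefixes, the VPI/VCI values, the prefix length, and the `send_cell` hook are all made up and do not come from the paper.

    ```python
    # Hypothetical source-side sketch: one resolution per destination ATM-LAN,
    # then a single (seamless) connection to that LAN's CLS for all cells.
    CLS_BY_LAN_PREFIX = {              # ATM-LAN prefix -> VPI/VCI of that LAN's CLS
        "47.0005.01": (12, 34),
        "47.0005.02": (12, 35),
    }

    def send_datagram(dest_addr, cells, send_cell):
        lan_prefix = dest_addr[:10]                 # crude prefix extraction (illustrative)
        vpi_vci = CLS_BY_LAN_PREFIX[lan_prefix]     # resolve the access point, not the host
        for cell in cells:                          # every cell uses the same connection
            send_cell(vpi_vci, cell)
    ```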

  • A Continued Fraction Expansion and the Onset of Chaos

    Mitsuo KONO  

     
    PAPER-Nonlinear Phenomena and Analysis

    Vol: E77-A No:2  Page(s): 417-421

    An analytical method is developed to determine the critical value of the control parameter of a dynamical system above which chaos is initiated. An initial value problem for a dynamical system is shown to be solved with the aid of a continued fraction expansion which converges very rapidly. The result is confirmed by numerical experiments.
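    For readers unfamiliar with continued fractions, the sketch below evaluates the convergents of a continued fraction with the standard three-term recurrences and shows how quickly they approach a limit. It only illustrates convergent evaluation in general; the particular expansion constructed in the paper for the dynamical system, and the criterion for the onset of chaos, are not reproduced.

    ```python
    from fractions import Fraction

    def convergents(coeffs):
        """Yield the convergents h_n / k_n of [a0; a1, a2, ...] using
           h_n = a_n h_{n-1} + h_{n-2} and k_n = a_n k_{n-1} + k_{n-2}."""
        h_prev, h = 1, coeffs[0]
        k_prev, k = 0, 1
        yield Fraction(h, k)
        for a in coeffs[1:]:
            h, h_prev = a * h + h_prev, h
            k, k_prev = a * k + k_prev, k
            yield Fraction(h, k)

    # Example: [1; 2, 2, 2, ...] converges rapidly to sqrt(2).
    print([float(c) for c in convergents([1, 2, 2, 2, 2, 2])])
    ```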
