Hirofumi HAMAMURA Hiroaki KOMATSU
This paper describes special-purpose hardware for large-scale logic simulation, called SP2, which executes an event-driven algorithm and can simulate up to sixteen million gates. SP2 was developed in 1992 for system verification of large-scale computer designs, as a successor to SP1, developed in 1987. SP2 provides enhanced performance, throughput, and delay accuracy over SP1. Since 1992, SP2 has been widely used for system-level simulation of mainframes, supercomputers, UNIX servers, and microprocessors. It is used as a powerful simulator either in all stages of design verification or, when emulators handle regression testing, in the early stages before that testing begins.
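As a rough illustration of the event-driven algorithm such machines execute (SP2's actual hardware organization is not described at this level here), the Python sketch below processes gate events from a time-ordered queue and re-evaluates only the fanout of nets whose values change; the Gate class and the two-operator gate library are hypothetical simplifications.

```python
import heapq
from collections import defaultdict

class Gate:
    """Hypothetical gate record: operator, input nets, output net, delay."""
    def __init__(self, op, inputs, output, delay):
        self.op, self.inputs, self.output, self.delay = op, inputs, output, delay

def simulate(gates, stimuli, t_end):
    fanout = defaultdict(list)            # net -> gates that read it
    for g in gates:
        for net in g.inputs:
            fanout[net].append(g)
    value = defaultdict(int)              # net -> current logic value (0 initially)
    events = [(t, net, v) for t, net, v in stimuli]
    heapq.heapify(events)                 # time-ordered event queue
    while events and events[0][0] <= t_end:
        t, net, v = heapq.heappop(events)
        if value[net] == v:
            continue                      # no value change -> nothing to propagate
        value[net] = v
        for g in fanout[net]:             # re-evaluate only the affected gates
            ins = [value[i] for i in g.inputs]
            out = all(ins) if g.op == "AND" else any(ins)
            heapq.heappush(events, (t + g.delay, g.output, int(out)))
    return dict(value)

# e.g. a single AND gate with delay 2:
print(simulate([Gate("AND", ["a", "b"], "y", 2)],
               [(0, "a", 1), (0, "b", 1)], 10))
```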
Hafiz Md. HASAN BABU Tsutomu SASAO
This paper proposes a method to construct smaller binary decision diagrams for characteristic functions (BDDs for CFs). A BDD for CF represents an n-input m-output function, and evaluates all the outputs in O(n+m) time. We derive an upper bound on the number of nodes of the BDD for CF of n-bit adders (adrn). We also compare complexities of BDDs for CFs with those of shared binary decision diagrams (SBDDs) and multi-terminal binary decision diagrams (MTBDDs). Our experimental results show: 1) BDDs for CFs are usually much smaller than MTBDDs; 2) for adrn and for some benchmark circuits, BDDs for CFs are the smallest among the three types of BDDs; and 3) the proposed method often produces smaller BDDs for CFs than an existing method.
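To make the O(n+m) evaluation concrete, here is a minimal Python sketch of one descent through a BDD for CF. It assumes, as an illustration of the ordering property such evaluation relies on, that one child of every output-variable node is the 0-terminal; the Node class and variable tagging are hypothetical stand-ins.

```python
# A non-terminal node tests either an input variable ("x", i) or an
# output variable ("y", j); the variable order interleaves them so a
# single root-to-terminal path fixes every output.
class Node:
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

ZERO, ONE = "0", "1"                       # terminal sentinels

def evaluate(root, x):
    """One O(n+m) descent: inputs steer the path, outputs are read off."""
    y, node = {}, root
    while node not in (ZERO, ONE):
        kind, idx = node.var
        if kind == "x":                    # input variable: follow x[idx]
            node = node.high if x[idx] else node.low
        else:                              # output variable: take the branch
            if node.high != ZERO:          # not leading to the 0-terminal
                y[idx], node = 1, node.high
            else:
                y[idx], node = 0, node.low
    return y
```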
This paper presents a new approach to the simulation of Dynamically Reconfigurable Logic (DRL) systems, which models dynamic reconfiguration more accurately than previously reported techniques. Our method, named Clock Morphing (CM), models dynamic reconfiguration via a reconfigured module's clock signal, while using a dedicated signal value to indicate that reconfiguration is in progress. We discuss problems associated with other approaches to DRL simulation and describe the main principles behind the proposed technique. We further demonstrate the feasibility of CM-based DRL simulation with an example implementation in VHDL.
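The paper's implementation is in VHDL; purely as an illustration of the idea, the Python toy below suspends a module's behaviour while it is being reconfigured and drives its output with a dedicated reconfiguration value. RECONF stands in for the dedicated signal value, and the class and method names are invented.

```python
RECONF = "R"   # dedicated signal value marking "module is reconfiguring"

class DynamicModule:
    def __init__(self, behaviours, active):
        self.behaviours = behaviours       # name -> function(inputs) -> output
        self.active = active
        self.reconfiguring = False

    def start_reconfig(self, target):
        self.reconfiguring, self._target = True, target

    def clock(self, inputs, reconfig_done=False):
        """One clock event; the clock is 'morphed' away during reconfiguration."""
        if self.reconfiguring:
            if reconfig_done:
                self.active, self.reconfiguring = self._target, False
            return RECONF                  # output carries the reconfiguration value
        return self.behaviours[self.active](inputs)

# e.g. a module that can be an adder or a subtractor:
m = DynamicModule({"add": lambda v: v[0] + v[1],
                   "sub": lambda v: v[0] - v[1]}, "add")
print(m.clock((3, 2)))                     # 5
m.start_reconfig("sub")
print(m.clock((3, 2)))                     # 'R' while reconfiguring
print(m.clock((3, 2), reconfig_done=True)) # 'R' on the completing cycle
print(m.clock((3, 2)))                     # 1, new behaviour active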
Hafiz Md. HASAN BABU Tsutomu SASAO
This paper describes a method to represent m output functions using shared multi-terminal binary decision diagrams (SMTBDDs). An SMTBDD(k) consists of multi-terminal binary decision diagrams (MTBDDs), where each MTBDD represents k output functions. An SMTBDD(k) is a generalization of shared binary decision diagrams (SBDDs) and MTBDDs: for k=1, it is an SBDD, and for k=m, it is an MTBDD. The size of a BDD is the total number of nodes. The features of SMTBDD(k)s are: 1) they are often smaller than SBDDs or MTBDDs; and 2) they evaluate k outputs simultaneously. We also propose an algorithm for grouping output functions to reduce the size of SMTBDD(k)s. Experimental results show the compactness of SMTBDD(k)s. An SMTBDDmin denotes the smaller of an SMTBDD(2) and an SMTBDD(3) for a given function. The average relative sizes of SBDDs, MTBDDs, and SMTBDDmins are 1.00, 152.73, and 0.80, respectively.
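The paper's grouping algorithm is not reproduced here; the greedy Python sketch below merely illustrates the kind of heuristic involved, pairing outputs that share many support variables so that each MTBDD of an SMTBDD(k) packs related functions. The function name and data layout are invented for the example.

```python
def group_outputs(support, k=2):
    """Greedy stand-in for the paper's grouping algorithm: each group of
    at most k outputs is seeded with an arbitrary output, then filled
    with the outputs sharing the most support variables with the seed.
    support maps each output name to the set of its input variables."""
    remaining, groups = set(support), []
    while remaining:
        seed = remaining.pop()
        group = [seed]
        while len(group) < k and remaining:
            best = max(remaining, key=lambda g: len(support[seed] & support[g]))
            group.append(best)
            remaining.remove(best)
        groups.append(group)
    return groups

support = {"f0": {0, 1}, "f1": {0, 1, 2}, "f2": {3, 4}, "f3": {3, 4, 5}}
print(group_outputs(support, k=2))   # pairs outputs with overlapping support
```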
A novel method has been developed for the guided-probe diagnosis of high-performance LSIs containing macrocells, which lack the internal netlist essential to the diagnosis. In this method, the macrocell netlist is derived from its layout by extracting a leaf-cell-level netlist, which is combined with the original netlist. Logic models for the leaf cells in the extracted netlist are also generated to obtain logic-simulation data inside the macrocells. The logic modeling is extended to memory macrocells, based on the idea that analog-behavior leaf cells in a memory macrocell can be converted into logically equivalent circuits for logic simulation; specifically, sense amplifiers and wired-OR connections on bit lines are replaced with corresponding logic-behavior models, as sketched below. The proposed method has been successfully applied to actual design data of LSIs containing macrocells, and it has been verified that fault paths inside macrocells can be traced accurately and that the logic models give good timing resolution in logic simulation. Using the proposed method, LSIs containing macrocells can be diagnosed by an electron-beam guided-probe system regardless of the macrocell type and without the need for a "golden" device.
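As a hedged illustration of such logic-behavior models (the paper derives its actual models from the layout), the Python stand-ins below replace a wired-OR bit line with an OR function and a differential sense amplifier with a comparison that yields an unknown value when the bit-line pair is ambiguous.

```python
def wired_or(drivers):
    """Logic-behavior stand-in for a wired-OR bit-line connection."""
    return int(any(drivers))

def sense_amp(bit, bit_bar):
    """Logic-behavior stand-in for a differential sense amplifier:
    resolves to 0/1 from the bit-line pair, or to unknown ('X') when
    the analog behavior would be ambiguous."""
    if bit == bit_bar:
        return "X"
    return int(bit > bit_bar)

print(wired_or([0, 0, 1]))   # 1
print(sense_amp(1, 0))       # 1
print(sense_amp(0, 0))       # 'X'
```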
Yukihiro IGUCHI Tsutomu SASAO Munehiro MATSUURA
Three types of ternary decision diagrams (TDDs) are considered: AND-TDDs, EXOR-TDDs, and Kleene-TDDs. Kleene-TDDs are useful for logic simulation in the presence of unknown inputs. Let N(BDD:f), N(AND-TDD:f), and N(EXOR-TDD:f) be the numbers of non-terminal nodes in the BDD, the AND-TDD, and the EXOR-TDD for f, respectively, and let N(Kleene-TDD:f~) be the number of non-terminal nodes in the Kleene-TDD for f~, where f~ is the regular ternary function corresponding to f. Then N(BDD:f) ≤ N(TDD:f) for each type of TDD. For parity functions, N(BDD:f) = N(AND-TDD:f) = N(EXOR-TDD:f) = N(Kleene-TDD:f~). For unate functions, N(BDD:f) = N(AND-TDD:f). The sizes of Kleene-TDDs are O(3^n/n) for arbitrary functions and O(n^3) for symmetric functions. There exists a 2n-variable function whose Kleene-TDD requires O(n) nodes with the best variable order but O(3^n) nodes with the worst order.
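A small Python sketch may help fix the definitions: Kleene's three-valued connectives, and the regular ternary extension of a Boolean function, which is what a Kleene-TDD represents. The brute-force regular() below is the definition, not the efficient TDD-based evaluation.

```python
from itertools import product

X = "X"   # the unknown value of Kleene's three-valued logic

def t_not(a):
    return X if a == X else 1 - a

def t_and(a, b):
    if a == 0 or b == 0:
        return 0          # a 0 input forces the output regardless of X
    if a == 1 and b == 1:
        return 1
    return X

def t_or(a, b):
    return t_not(t_and(t_not(a), t_not(b)))

def regular(f, inputs):
    """Regular ternary extension of a Boolean f: the output is 0 or 1
    only when every 0/1 completion of the X inputs agrees, else X."""
    unknown = [i for i, v in enumerate(inputs) if v == X]
    results = set()
    for fill in product((0, 1), repeat=len(unknown)):
        vec = list(inputs)
        for i, v in zip(unknown, fill):
            vec[i] = v
        results.add(f(vec))
    return results.pop() if len(results) == 1 else X

print(regular(lambda v: v[0] | v[1], [1, X]))   # 1: the X cannot matter
print(regular(lambda v: v[0] & v[1], [1, X]))   # 'X': the X decides
```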
A CAD-based technique for diagnosing the faulty portion of a CMOS LSI with a single fault has been developed; it uses abnormal Iddq to indicate the presence of physical damage in the circuit. The method progressively narrows down the faulty portion by extracting the inner logic state of each block from logic simulation and identifying the test vector numbers with abnormal Iddq. To simplify the diagnosis, the hierarchical circuit structure is divided into primitive blocks consisting of simple logic gates. For each primitive block, the technique compares inner logic states: if a state observed under abnormal Iddq also occurs under normal Iddq, the block is regarded as normal; otherwise it is regarded as faulty. The faulty portion within a faulty block can then be localized easily by simulating its input logic states. Experimental results on a real faulty LSI with 100k gates demonstrated diagnosis times within ten hours and reliable extraction of the faulty portion.
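The per-block comparison lends itself to a compact illustration. In this hedged Python sketch, the data layout and the function name classify_blocks are invented for the example.

```python
def classify_blocks(states, abnormal_vecs, normal_vecs):
    """states[block][vector] holds the tuple of the block's inner logic
    values for that test vector.  A block whose state under an
    abnormal-Iddq vector also occurs under some normal-Iddq vector is
    regarded as normal; a block whose abnormal-Iddq states never occur
    under normal Iddq remains a faulty candidate."""
    faulty = []
    for block, by_vec in states.items():
        normal_states = {by_vec[v] for v in normal_vecs}
        if all(by_vec[v] not in normal_states for v in abnormal_vecs):
            faulty.append(block)
    return faulty

states = {"blk0": {0: (0, 1), 1: (0, 1)},    # same state with/without abnormal Iddq
          "blk1": {0: (1, 1), 1: (0, 0)}}    # a state unique to the abnormal vector
print(classify_blocks(states, abnormal_vecs=[1], normal_vecs=[0]))  # ['blk1']
```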
We discuss a processor scheduling problem for parallel logic simulation of combinational circuits. In the scheduling problem discussed in this paper, for logic simulation using the time-first method, the time needed for each gate evaluation is not given beforehand and is not constant. This feature distinguishes the problem from typical task scheduling problems. First, we devise a new algorithm, MET, to solve the scheduling problem; its key idea is to determine the processor schedule incrementally and dynamically, as sketched below. Then, experimental evaluations using twelve well-known benchmark combinational circuits show the usefulness of Algorithm MET compared with conventional static algorithms. We believe that this is a first step toward parallel logic simulation of combinational circuits.
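A minimal Python sketch of the incremental, dynamic assignment idea (not the paper's exact MET rules) follows: gates are released level by level and each one is assigned to the earliest-available processor, with the evaluation time learned only when the evaluation runs.

```python
import heapq

def dynamic_schedule(levels, eval_time, n_proc):
    """levels: topological levels of the circuit, each a list of gates;
    eval_time(gate): duration, known only at run time (not beforehand);
    returns gate -> (processor, finish time)."""
    procs = [(0.0, p) for p in range(n_proc)]   # (time available, id)
    heapq.heapify(procs)
    finish = {}
    for level in levels:
        barrier = 0.0
        for gate in level:
            t, p = heapq.heappop(procs)         # earliest-available processor
            done = t + eval_time(gate)
            finish[gate] = (p, done)
            barrier = max(barrier, done)
            heapq.heappush(procs, (done, p))
        # time-first (levelized) simulation: synchronize before the next level
        procs = [(max(t, barrier), p) for t, p in procs]
        heapq.heapify(procs)
    return finish

# e.g. two levels of gates, variable evaluation costs, two processors:
print(dynamic_schedule([["g1", "g2", "g3"], ["g4"]],
                       lambda g: {"g1": 2, "g2": 1, "g3": 3, "g4": 1}[g], 2))
```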
Cone and Block methods that sharply reduce logic simulation time in E-beam guided-probe diagnosis are proposed. These methods are based on a primitive-cell-level tracing algorithm, which traces faulty-state cells one by one at the primitive-cell level. By executing logic simulations so that simulated responses are reported only for the small set of nodes on the tracing path and in its immediate vicinity, simulation CPU time is sharply reduced with state-of-the-art logic simulators such as Verilog-XL. With the proposed methods, the total CPU time of a diagnostic process can be reduced to 1/700 that of a conventional method, and the total amount of simulation data to 1/40 of its original amount. The methods were applied to the guided-probe diagnosis of actual 110k-gate ASIC chips, and it was verified that the chips could be diagnosed in under seven hours per device, which is practical. This technology will greatly contribute to shortening the turnaround time of ASIC development.
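As an illustration of how the reported node set can be restricted (the exact Cone/Block partitioning rules are in the paper), the Python sketch below collects the fan-in cone of a probed node; a simulator would then be asked to report responses only for these nets rather than for the whole circuit.

```python
def fanin_cone(netlist, root):
    """Collect the input cone of a node in a combinational netlist.
    netlist[net] -> list of nets driving it; nets without an entry
    are primary inputs."""
    cone, stack = set(), [root]
    while stack:
        net = stack.pop()
        if net in cone:
            continue
        cone.add(net)
        stack.extend(netlist.get(net, []))
    return cone

netlist = {"y": ["n1", "n2"], "n1": ["a", "b"], "n2": ["b", "c"]}
print(fanin_cone(netlist, "y"))   # only these nets need simulated responses
```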
Masahiro TOMITA Naoaki SUGANUMA Kotaro HIRANO
This paper presents a Reconfigurable Machine (RM) capable of efficiently implementing a wide range of computationally complex algorithms. Its highly flexible architecture, combining FPGAs with RAMs, supports a wide range of applications. Since its "gate-level programmability" allows various kinds of parallel processing techniques to be implemented, RM provides performance comparable to existing "special-purpose" engines. The in-circuit reconfiguration capability of FPGAs is used to reload several kinds of configuration data while the power is on. Thus, RM behaves like a general-purpose computer, applicable to various kinds of applications by loading programs. A first prototype has been built, incorporating five FPGAs and four SRAM memory banks, and has been applied to a multiple-delay logic simulator (LSIM). Employing a pipeline architecture, LSIM has achieved a performance of 1 million gate events per second at 4 MHz. The concept of RM is the best solution to the trade-offs between general-purpose machines and special-purpose ones. RM will be a hardware platform accelerating a wide range of applications, and it also poses an interesting problem in high-level synthesis.
The design of complex VLSI systems relies ever more heavily on scientific computing for numerical simulation and configuration/performance optimization. In particular, computer simulation is becoming an integral component of VLSI design methodology, for which a variety of computational advances have been made over the past two decades. Many different forms of simulation are used to verify a VLSI design at the various stages of the design process. They may be classified into functional or behavioral simulators, register transfer level (RTL) simulators, gate-level logic (or simply logic) simulators, timing simulators, circuit simulators, device simulators, and process simulators. Among these, the series of logic, timing, and circuit simulation is most strongly related to the design stage that deals with the logic and electrical waveform performance of VLSI circuits. This article surveys the state of the art of VLSI simulation, with emphasis on logic, timing, and circuit simulation, since readers of the Transactions may be interested primarily in this field.
Shinji KIMURA Shigemi KASHIMA Hiromasa HANEDA
The paper proposes a combined delay model to handle the variance of the delay time of logic elements, and a new timing verification method based on the theory of regular expressions. For logic elements such as the TTL SN7400, the minimum delay time (dm), the maximum delay time (dM), and the typical delay time are specified in the manual, and the actual delay of an element lies in the interval between dm and dM. We assume a discrete time axis, and we represent the variance of the delay time as the set of output strings corresponding to each admissible delay; we call this the combined delay model, as sketched below. Since many output strings are generated from a single input string, the usual timing simulation method cannot be applied. We therefore propose a timing verification method based on extracting the behavior of logic circuits with respect to a set of input strings: for the specified input set, the method extracts the output string set of each element in the circuit. We devised (1) a mechanism to keep the correspondence between a primary input string and each output string derived from it, (2) a mechanism to handle the nondeterminism included in the combined delay model, and (3) an event-driven-like data compaction method for representing finite automata. We focus on the hazard detection problem and the verification of asynchronous circuits, and show the effectiveness of our method on medium-sized circuits with about 100 elements. The method is subject to state explosion, but the data compaction method and the extraction for only the specified input set are useful for controlling it.
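To make the combined delay model concrete, the Python sketch below enumerates, on a discrete time axis, the output string set that a single-input element generates for one input string when its delay may be any integer value between dm and dM. The flat string representation is a simplification of the paper's automaton-based one.

```python
def output_strings(input_string, gate, d_min, d_max):
    """One input string yields a SET of output strings, one per
    admissible delay d with d_min <= d <= d_max; gate maps a single
    input value to the output value."""
    outs = set()
    for d in range(d_min, d_max + 1):
        # shift the response by d time slots, padding the first d slots
        # with the output for the initial input value
        shifted = [gate(input_string[0])] * d + \
                  [gate(v) for v in input_string[: len(input_string) - d]]
        outs.add(tuple(shifted))
    return outs

# e.g. an inverter whose delay lies between 1 and 2 time units:
print(output_strings([0, 0, 1, 1, 0], lambda v: 1 - v, 1, 2))
# {(1, 1, 1, 0, 0), (1, 1, 1, 1, 0)} -- the nondeterminism the
# verification method must handle
```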