Keyword Search Result

[Keyword] RIF (311 hits)

261-280 of 311 hits

  • Protocol Verification Tool with Extended Petri Net and Horn Clause

    Takashi WATANABE  Tsuyoshi OHTA  Fumiaki SATO  Tadanori MIZUNO  

     
    PAPER

      Vol:
    E78-A No:11
      Page(s):
    1458-1467

    This paper proposes a protocol verification tool in which protocols are described with an extended Petri net and Horn clauses. The extended net model contributes to reducing the state space in verification through hierarchical description, and it also includes timed and colored nets. Horn clauses enable protocol designers to grasp a protocol through their declarative semantics; designers can describe non-critical but mandatory portions of a protocol, such as error processing or abortion, with Horn clauses. Protocols are verified through simulation. Verification proceeds by one of two methods, all-in-one or hierarchical. In the all-in-one method, the entire description is translated into Prolog clauses and simulated exhaustively, whereas in hierarchical verification, simulation begins at the lowest layer and deduces sufficient conditions that guarantee liveness and safeness of the net model. The layer is then replaced by a simpler net model that is incorporated into the next higher layer. The scheme is applied to an illustrative example, the Alternating Bit protocol, to discuss its effectiveness.

  • Verification and Refinement for System Requirements

    Kukhwan SONG  Atushi TOGASHI  Norio SHIRATORI  

     
    PAPER

      Vol:
    E78-A No:11
      Page(s):
    1468-1478

    Because information processing systems have become large and complex, formal description methods are needed to specify systems and to support their efficient and reliable design. During the early stages of system design, it is often necessary to modify or change system requirements, which may influence the whole design. We have previously proposed a flexible description methodology that copes with such modifications or changes in the system requirements in order to obtain a formal specification of the system. We have also shown that functional requirements can be modeled by a Logical Petri Net (LPN), a kind of extended Petri net, in order to derive the formal specification. In this paper, we propose a method for verifying system requirements that may contain certain kinds of logical errors. Further, we show how to decompose and refine a requirement description hierarchically, and we discuss how, following our refinement method, a formal specification can be derived flexibly from a requirement description even as that description changes.

  • On Symbolic Model Checking in Petri Nets

    Kunihiko HIRAISHI  Minoru NAKANO  

     
    PAPER

      Vol:
    E78-A No:11
      Page(s):
    1479-1486

    The symbolic model checking algorithm was proposed for the efficient verification of sequential circuits. In this paper, we show that the algorithm is also applicable to the verification of concurrent systems described by finite-capacity Petri nets. Specifications of the system are given in the form of temporal logic formulas, and the algorithm checks whether these formulas hold in the state space. All logical operations are performed on Binary Decision Diagrams. Since the algorithm does not enumerate states one by one, the problem of state space explosion can be avoided in many cases.
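
    A minimal sketch of the fixpoint computation the abstract describes, not the authors' implementation: the reachable markings of a small finite-capacity Petri net are computed as the least fixpoint Reach = Init ∪ Image(Reach). The paper encodes the marking sets and the transition relation as Binary Decision Diagrams; ordinary Python sets stand in for BDDs here, and the two-place net is invented for illustration.

        # Two places p0, p1 with capacity 1; t0 moves the token p0 -> p1, t1 moves it back.
        CAPACITY = 1
        PLACES = ["p0", "p1"]
        TRANSITIONS = [
            ({"p0": 1}, {"p1": 1}),   # t0: (pre, post)
            ({"p1": 1}, {"p0": 1}),   # t1
        ]

        def fire(marking, pre, post):
            """Return the successor marking as a tuple, or None if firing is blocked."""
            m = dict(marking)
            for p, w in pre.items():
                if m.get(p, 0) < w:
                    return None               # not enough tokens
                m[p] -= w
            for p, w in post.items():
                m[p] = m.get(p, 0) + w
                if m[p] > CAPACITY:
                    return None               # finite capacity exceeded
            return tuple(m[p] for p in PLACES)

        def reachable(initial):
            """Least fixpoint over sets of markings (BDDs in the actual algorithm)."""
            reach, frontier = {initial}, {initial}
            while frontier:
                image = set()
                for m in frontier:
                    md = dict(zip(PLACES, m))
                    for pre, post in TRANSITIONS:
                        succ = fire(md, pre, post)
                        if succ is not None:
                            image.add(succ)
                frontier = image - reach
                reach |= frontier
            return reach

        print(reachable((1, 0)))              # {(1, 0), (0, 1)}

    A reachability property such as "some marking with a token in p1 can be reached" is then a membership test on the computed set; the full algorithm evaluates arbitrary temporal-logic formulas by similar fixpoints over BDD-represented sets.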

  • Practical Program Validation for State-Based Reactive Concurrent Systems--Harmonization of Simulation and Verification--

    Naoshi UCHIHIRA  Hideji KAWATA  

     
    PAPER

      Vol:
    E78-A No:11
      Page(s):
    1487-1497

    This paper proposes a practical method of program validation for state-based reactive concurrent systems, with particular relevance to plant control systems. Plant control systems can be represented by extended state transition systems (e.g., communicating asynchronous transition systems). Our validation method is based on state space analysis. Since naive state space analysis causes the state explosion problem, techniques to ease state explosion are necessary. One of the most promising is the partial order method. However, such techniques usually require structural assumptions and are not always effective for actual control systems. We therefore advocate integrating and harmonizing verification (state space analysis based on the partial order method) with simulation (the conventional validation technique). In the proposed method, verification is modeled as exhaustive simulation over the state space, and two types of simulation management are introduced. One is logical selection (pruning) based on the partial order method; the other is heuristic selection based on priorities (a priori precedence) specified by the user. In order to harmonize verification (logical selection) with conventional simulation (heuristic selection), we propose a new logical selection mechanism, the default priority method. The default priority method, which prunes redundant state generation based on default priorities, is in harmony with heuristic selection based on user-specified priorities. We have implemented a practical validation tool, the Simulation And Verification Environment for Reactive Concurrent Systems (SAVE/RCS), and applied it to chemical plant control systems.

  • Using Process Algebras for the Semantic Analysis of Data Flow Networks

    Cinzia BERNARDESCHI  Andrea BONDAVALLI  Luca SIMONCINI  

     
    PAPER-Computer Systems

      Vol:
    E78-D No:8
      Page(s):
    959-968

    Data flow is a paradigm for concurrent computations in which a collection of parallel processes communicate asynchronously. Many semantic models have been defined for nondeterministic data flow networks; however, reasoning about the semantics of a network remains complex. In this paper, we introduce a transformation from data flow networks to the LOTOS specification language so that the theories and tools developed for process algebras become available for trace-based semantic analysis of the networks. The transformation does not establish a one-to-one mapping between the traces of a data flow network and those of the LOTOS specification, but maps each network to a specification that usually contains more traces. The obtained specification has the same set of traces as the corresponding network when these are finite; otherwise, non-fair traces are also included. Formal analysis and verification methods can still be applied to prove properties of the original data flow network, and for networks with finite traces, network equivalence can also be proved.

  • Temporal Verification of Real-Time Systems

    Sérgio V. CAMPOS  Edmund M. CLARKE  Wilfredo MARRERO  Marius MINEA  Hiromi HIRAISHI  

     
    PAPER

      Vol:
    E78-D No:7
      Page(s):
    796-801

    This paper presents a general method for computing quantitative information about finite-state real-time systems. We have developed algorithms that compute exact bounds on the delay between two specified events and on the number of occurrences of an event in a given interval. This technique allows us to determine performance measures such as schedulability, response time, and system load. Our algorithms produce more detailed information than traditional methods. This information leads to a better understanding of system behavior, in addition to determining its correctness. The algorithms presented in this paper are efficiently implemented using binary decision diagrams and have been incorporated into the SMV symbolic model checker. Using this method, we have verified a model of an aircraft control system with 10^15 states. The results obtained demonstrate that our method can be successfully applied in the verification of real-time system designs.
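
    As an illustration of the kind of quantitative question these algorithms answer, the sketch below computes the minimum and maximum number of steps from a "request" state to a "grant" state on a tiny explicit state graph. The graph and state names are invented; the paper performs the same computations symbolically with BDDs inside SMV, and a cycle that can postpone the target event forever makes the maximum delay unbounded.

        from collections import deque

        GRAPH = {                     # state -> successor states (one step = one edge)
            "req":   ["wait1"],
            "wait1": ["wait2", "grant"],
            "wait2": ["grant"],
            "grant": ["req"],
        }
        START, FINISH = {"req"}, {"grant"}

        def min_delay(graph, start, finish):
            """Breadth-first search: length of the shortest path into `finish`."""
            dist, queue = {s: 0 for s in start}, deque(start)
            while queue:
                s = queue.popleft()
                if s in finish:
                    return dist[s]
                for t in graph[s]:
                    if t not in dist:
                        dist[t] = dist[s] + 1
                        queue.append(t)
            return None                           # finish is unreachable

        def max_delay(graph, start, finish):
            """Longest path that avoids `finish`; None means the delay is unbounded."""
            best = 0
            def dfs(s, steps, on_path):
                nonlocal best
                if s in finish:
                    best = max(best, steps)
                    return True
                if s in on_path:
                    return False                  # cycle avoiding finish => unbounded
                bounded = True
                for t in graph[s]:
                    bounded &= dfs(t, steps + 1, on_path | {s})
                return bounded
            ok = all(dfs(s, 0, frozenset()) for s in start)
            return best if ok else None

        print(min_delay(GRAPH, START, FINISH))    # 2
        print(max_delay(GRAPH, START, FINISH))    # 3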

  • Towards Verification of Bit-Slice Circuits--Time-Space Modal Model Checking Approach--

    Hiromi HIRAISHI  

     
    PAPER

      Vol:
    E78-D No:7
      Page(s):
    791-795

    The goal of this paper is to propose a new symbolic model checking approach, named time-space modal model checking, which is applicable to the verification of bit-slice microprocessors of infinite bit width and one-dimensional systolic arrays of infinite length. A simple benchmark result shows the effectiveness of the proposed approach.

  • A 65 ns 3 V-only NAND-Flash Memory with New Verify Scheme and Folded Bit-Line Architecture

    Hiromi NOBUKATA  Kenichi SATORI  Shinji HIRAMATSU  Hideki ARAKAWA  

     
    PAPER

      Vol:
    E78-C No:7
      Page(s):
    818-824

    An experimental 3 V-only 4 Mb NAND Flash memory with 65 ns access time has been developed using a new charge pump circuit and novel circuit techniques such as folded bit-line architecture. By adopting a new program verify technique, programming time is reduced to 11 µs/Byte.
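
    The program-verify idea mentioned above can be pictured with the following toy loop, which is only a generic sketch (the article's actual verify circuitry, voltages, and timing are not reproduced): each short program pulse is followed by a read-back against the verify level, and further pulses are applied only to cells that have not yet reached it.

        def program_byte(cells, verify_level=2.0, pulse_dv=0.4, max_pulses=8):
            """`cells` holds hypothetical threshold voltages; raise each to the verify level."""
            for pulse in range(max_pulses):
                pending = [i for i, vt in enumerate(cells) if vt < verify_level]
                if not pending:                   # every bit passed verify
                    return pulse                  # number of pulses used
                for i in pending:                 # program pulse only where still needed
                    cells[i] += pulse_dv
            raise RuntimeError("program failed: verify level not reached")

        vts = [0.5, 1.2, 1.9, 0.8]                # made-up erased-state thresholds (V)
        print(program_byte(vts), vts)

    Stopping the pulses per cell as soon as it passes verify is what keeps the programmed threshold distribution tight while keeping the overall programming time short.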

  • A Formal Verification Algorithm for Pipelined Processors

    Toru SHONAI  Tsuguo SHIMIZU  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E78-A No:5
      Page(s):
    618-631

    We describe a formal verification algorithm for pipelined processors. The algorithm proves the equivalence between a processor's design and its specification by rewriting recursive functions and by a new type of mathematical induction, extended recursive induction. After the user indicates only the selectors in the design, the algorithm can automatically prove processors having more than 10^(10^10) states. The algorithm is manually applied to benchmark processors with pipelined control, and we discuss how data width, memory size, and the numbers of pipeline stages and instructions influence the computation cost of proving the correctness of the processors. Further, the algorithm can be used to generate a pipeline invariant.

  • Nonlocal Impact Ionization Model and Its Application to Substrate Current Simulation of n-MOSFET's

    Ken-ichiro SONODA  Mitsuru YAMAJI  Kenji TANIGUCHI  Chihiro HAMAGUCHI  Tatsuya KUNIKIYO  

     
    PAPER

      Vol:
    E78-C No:3
      Page(s):
    274-280

    We propose a nonlocal impact ionization model applicable to the drain region, where the electric field increases exponentially. The model is expressed as a function of the electric field and a characteristic length determined by the gate oxide thickness and the source/drain junction depth. An analytical substrate current model for n-MOSFETs is also derived from the new nonlocal impact ionization model. The model explains why the theoretical characteristic length differs from the empirical expressions used in pseudo-two-dimensional MOSFET models. The nonlocal impact ionization model, implemented in a device simulator, demonstrates that the new model can predict substrate current correctly within the framework of the drift-diffusion model.
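
    For orientation, the conventional pseudo-two-dimensional (local) picture that the abstract contrasts with is roughly the following; the symbols and constants are the textbook ones (A_i, B_i impact ionization coefficients, t_ox gate oxide thickness, x_j junction depth), not the paper's refined nonlocal expressions:

        l     ≈ sqrt( (eps_Si / eps_ox) * t_ox * x_j )                 characteristic length of the drain field
        E_m   ≈ (V_DS - V_DSAT) / l                                    peak lateral field near the drain
        I_sub ≈ (A_i / B_i) * I_DS * (V_DS - V_DSAT) * exp(-B_i / E_m)  substrate current from impact ionization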

  • Reduced State Space Generation of Concurrent Systems Using Weak Persistency

    Kunihiko HIRAISHI  

     
    PAPER

      Vol:
    E77-A No:10
      Page(s):
    1602-1606

    State space explosion is a serious problem in analyzing discrete event systems that allow the concurrent occurrence of events. A new method is proposed for generating reduced state spaces of such systems. The method is an improvement of Valmari's stubborn set method. The generated state space preserves liveness, livelocks, and the terminal states of the ordinary state space. Petri nets are used as the system model, and a method is shown for generating a reduced state space from a given Petri net.
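
    A minimal sketch of the reduced generation loop: at each marking only a selected subset of the enabled transitions is fired. The two-transition net, the naming, and the selection rule below are invented for illustration; in particular, select() is only a placeholder for the paper's weak-persistency test, which is what makes such a reduction sound. Because the two transitions are independent, the reduced graph still reaches the terminal marking.

        TRANSITIONS = {                # name -> (pre, post); t1 and t2 are independent
            "t1": ({"a": 1}, {"b": 1}),
            "t2": ({"c": 1}, {"d": 1}),
        }

        def fire(m, pre, post):
            """Successor marking as a frozenset of (place, tokens), or None if disabled."""
            if any(m.get(p, 0) < w for p, w in pre.items()):
                return None
            m2 = dict(m)
            for p, w in pre.items():
                m2[p] -= w
            for p, w in post.items():
                m2[p] = m2.get(p, 0) + w
            return frozenset(m2.items())

        def select(enabled):
            """Placeholder for the weakly-persistent subset; here simply the first transition."""
            return enabled[:1] if len(enabled) > 1 else enabled

        def explore(initial, use_reduction):
            seen, stack = set(), [frozenset(initial.items())]
            while stack:
                m = stack.pop()
                if m in seen:
                    continue
                seen.add(m)
                md = dict(m)
                enabled = [(pre, post) for pre, post in TRANSITIONS.values()
                           if fire(md, pre, post) is not None]
                for pre, post in (select(enabled) if use_reduction else enabled):
                    stack.append(fire(md, pre, post))
            return seen

        init = {"a": 1, "c": 1}
        print(len(explore(init, False)), len(explore(init, True)))   # 4 states vs. 3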

  • Automatic Seal Imprint Verification System with Imprint Quality Assessment Function and Its Performance Evaluation

    Katsuhiko UEDA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E77-D No:8
      Page(s):
    885-894

    An annoying problem in automatic seal imprint verification is that seal imprints may vary considerably, even when they are all produced from a single seal. This paper proposes a new automatic seal imprint verification system that adds an imprint quality assessment function to our previous system in order to solve this problem, and it also examines the verification performance of the system experimentally. The system consists of an imprint quality assessment process and a verification process. In the quality assessment process, an examined imprint is first divided into partial regions, and each partial region is classified into one of three quality classes (good quality region, poor quality region, or background) on the basis of the characteristics of its gray level histogram. In the verification process, only the good quality partial regions of the examined imprint are verified against the registered imprint. Finally, the examined imprint is classified as either genuine or a forgery. However, if the quality assessment finds too many poor quality regions, the examined imprint is classified as "ambiguous" without verification processing. A major advantage of this system is that it can verify seal imprints of various qualities efficiently and accurately. Computer experiments with real seal imprints were performed using this system, the previous system (without the quality assessment function), and bank document examiners. The results show that this system is superior in verification performance to our previous system and performs comparably to the document examiners, demonstrating the effectiveness of adding the quality assessment function to a seal imprint verification system.
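
    A minimal sketch of the block-wise idea described above. The 16x16 block size, the ink-fraction classifier, the matching score, and all thresholds are invented stand-ins for the paper's gray-level-histogram features and decision rules:

        import numpy as np

        BLOCK = 16

        def classify_block(block, ink_low=0.15, ink_high=0.85):
            """Crude stand-in for the three quality classes: background / good / poor."""
            ink = (block < 128).mean()          # fraction of dark (ink) pixels
            if ink < ink_low:
                return "background"
            if ink > ink_high:
                return "poor"                   # smeared or over-inked region
            return "good"

        def verify(examined, registered, accept=0.8, max_poor=0.5):
            h, w = examined.shape
            scores, poor, blocks = [], 0, 0
            for y in range(0, h - BLOCK + 1, BLOCK):
                for x in range(0, w - BLOCK + 1, BLOCK):
                    blk = examined[y:y + BLOCK, x:x + BLOCK]
                    label = classify_block(blk)
                    if label == "background":
                        continue
                    blocks += 1
                    if label == "poor":
                        poor += 1
                        continue
                    ref = registered[y:y + BLOCK, x:x + BLOCK]
                    scores.append(((blk < 128) == (ref < 128)).mean())
            if blocks and poor / blocks > max_poor:
                return "ambiguous"              # too many poor-quality regions
            if scores and np.mean(scores) >= accept:
                return "genuine"
            return "forgery"

        rng = np.random.default_rng(0)
        img = (rng.random((64, 64)) * 255).astype(np.uint8)
        print(verify(img, img))                 # identical images verify as "genuine"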

  • A Motion Compensation Technique for Down-Scaled Pictures in Layered Coding

    Masahiro IWAHASHI  Wataru KAMEYAMA  Koichi OHYAMA  Noriyoshi KAMBAYASHI  

     
    PAPER-Signaling System and Communication Protocol

      Vol:
    E77-B No:8
      Page(s):
    1007-1012

    This paper proposes a new motion compensation (MC) technique that reduces the blurring, called drift, observed in moving pictures down-scaled with a layered coding system. The encoder of the system compresses large amounts of digital video data in the same way as the MPEG (Moving Picture Experts Group) algorithm; the decoder, on the other hand, expands part of the compressed data and reconstructs down-scaled pictures. The purpose of this paper is to reduce the blurring observed in the reconstructed pictures. The cause of the blurring is analyzed, and the method is introduced as a solution to the problem. The new method is implemented by a small modification of the decoder's MC, namely increasing the number of taps of its interpolation filters. Its effectiveness on compressed moving pictures is confirmed not only by subjective tests but also by signal-to-noise ratio measurements.
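
    The tap-count change can be illustrated on a single line of samples. The sketch below is only a generic half-sample interpolation; the 4-tap coefficients are a common symmetric filter, not the ones designed in the paper:

        import numpy as np

        def half_sample(line, taps):
            """Interpolate values midway between samples with a symmetric FIR filter."""
            n = len(taps)
            pad_l, pad_r = n // 2 - 1, n // 2
            padded = np.pad(line.astype(float), (pad_l, pad_r), mode="edge")
            return np.array([np.dot(taps, padded[i:i + n]) for i in range(len(line) - 1)])

        line = np.array([10, 20, 40, 80, 60, 30], dtype=float)
        bilinear = half_sample(line, np.array([0.5, 0.5]))               # 2-tap MC filter
        longer   = half_sample(line, np.array([-1, 9, 9, -1]) / 16.0)    # 4-tap filter
        print(bilinear)
        print(longer)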

  • An Approach to Integrated Pen Interface for Japanese Text Entry

    Kazuharu TOYOKAWA  Kozo KITAMURA  Shin KATOH  Hiroshi KANEKO  Nobuyasu ITOH  Masayuki FUJITA  

     
    PAPER

      Vol:
    E77-D No:7
      Page(s):
    817-824

    An integrated pen interface system was developed to allow effective Japanese text entry. It consists of sub-systems for handwriting recognition, contextual post-processing, and enhanced Kana-to-Kanji conversion. The recognition sub-system uses a hybrid algorithm consisting of a pattern matcher and a neural network discriminator. Special care was taken to improve the recognition of non-Kanji and simple Kanji characters frequently used in fast data entry. The post-processor predicts consecutive characters on the basis of bigrams modified by the addition of parts of speech and the substitution of macro characters for Kanji characters. A Kana-to-Kanji conversion method designed for ease of use with a pen interface has also been integrated into the system. In an experiment in which 2,900 samples of Kanji and non-Kanji characters were obtained from 20 subjects, the original recognition accuracy of 83.7% (obtained with the pattern matching recognizer alone) improved to 90.7% with the addition of the neural network discriminator, and further to 94.4% with the addition of the post-processor. The improvement for non-Kanji characters was particularly marked.
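
    The bigram post-processing step can be sketched as a small Viterbi re-ranking pass over the recognizer's per-character candidates. The candidates, scores, and bigram probabilities below are made-up illustrations, not the paper's tables:

        import math

        candidates = [                          # per position: (character, recognizer score)
            [("日", 0.6), ("目", 0.4)],
            [("本", 0.5), ("木", 0.5)],
        ]
        bigram = {("日", "本"): 0.3, ("目", "木"): 0.05}   # P(next | prev); unseen pairs get a floor

        def rerank(candidates, bigram, floor=0.01):
            """Viterbi pass combining recognizer scores with character bigrams."""
            best = {c: math.log(s) for c, s in candidates[0]}
            back = [{}]
            for pos in range(1, len(candidates)):
                new, links = {}, {}
                for c, s in candidates[pos]:
                    value, prev = max((best[p] + math.log(s) + math.log(bigram.get((p, c), floor)), p)
                                      for p in best)
                    new[c], links[c] = value, prev
                best, back = new, back + [links]
            out = [max(best, key=best.get)]      # best final character, then backtrack
            for links in reversed(back[1:]):
                out.append(links[out[-1]])
            return "".join(reversed(out))

        print(rerank(candidates, bigram))        # "日本" beats "目木" after re-ranking

    With the recognizer scores alone the second position is a tie; the bigram table breaks it in favor of the linguistically plausible sequence, which is the role the post-processor plays above.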

  • Pattern Generation for Locating Logic Design Errors

    Masahiro TOMITA  Naoaki SUGANUMA  Kotaro HIRANO  

     
    PAPER-Computer Aided Design (CAD)

      Vol:
    E77-A No:5
      Page(s):
    881-893

    This paper presents techniques for generating input patterns for locating logic design errors (PLE's) by Boolean function manipulation based on binary decision diagrams (BDD's). A PLE involves one Boolean variable X together with constant values: a primary output of the correct circuit takes the value X, while that of the designed circuit takes either 0 or 1. Using PLE's, the X-algorithms locate single or multiple logic design errors in a combinational circuit. Although PLE's play the most important role in the X-algorithms, the condition under which PLE's exist has not been formalized. This paper gives a formal analysis of the existence condition of PLE's and shows that the condition is always satisfied by incorporating another type of PLE. From the condition, an implicit representation of PLE's is derived. In addition, two approaches are presented for generating PLE's by Boolean function manipulation based on BDD's: one generates all existing PLE's, and the other is a heuristic approach that obtains a limited number of PLE's in a short time. Both approaches generate PLE's including don't cares, which yields a compact representation of PLE's. Experimental results show the compactness of the proposed representations and the practicality of the pattern generation techniques.

  • An Analysis of the Economics of the VLSI Development Including Test Cost

    Koji NAKAMAE  Homare SAKAMOTO  Hiromu FUJIOKA  

     
    PAPER-Computer Aided Design (CAD)

      Vol:
    E77-A No:4
      Page(s):
    698-705

    In order to evaluate the effect of testing technologies such as electron beam (EB) testing and focused ion beam (FIB) reconstruction on the VLSI development cycle, the VLSI development period and cost are analyzed using detailed fault models that make it possible to take the effects of EB and FIB techniques into consideration. First, the specifications of fabricated VLSIs and the VLSI development cycle are modeled. Next, the faults that can be diagnosed by such testing techniques are modeled. Using the parametric model of the development cycle, the development period and cost are analyzed. In the fault diagnosis stage, the use of an EB tester, or the combined use of an EB tester and FIB equipment, in place of a traditional mechanical prober is considered. It is seen that using EB and FIB diagnosis equipment reduces the development period and cost by a factor of about 3. The effect of the scan path method is also evaluated with the same simulation approach; the results show that scan path design is effective in reducing both the period and the cost of the development cycle.

  • Japanese Sentence Generation Grammar Based on the Pragmatic Constraints

    Kyoko KAI  Yuko DEN  Yasuharu DEN  Mika OBA  Jun-ichi NAKAMURA  Sho YOSHIDA  

     
    PAPER

      Vol:
    E77-D No:2
      Page(s):
    181-191

    Naturalness of expression reflects various pragmatic factors in addition to grammatical factors. In this paper, we discuss the relations between expressions and two pragmatic factors: the speaker's point of view and the hierarchical relation among the participants. "Degree of empathy" and "class" are used to express these pragmatic factors as one-dimensional notions, so that inequalities and equalities over them become conditions for selecting natural expressions. We formulate these conditions as principles about lexical and syntactic constraints, and we have implemented a sentence generation grammar using the unification grammar formalism.

  • Algorithms for Drift-Diffusion Device Simulation Using Massively Parallel Processors

    Eric TOMACRUZ  Jagesh V. SANGHAVI  Alberto SANGIOVANNI-VINCENTELLI  

     
    PAPER-Numerics

      Vol:
    E77-C No:2
      Page(s):
    248-254

    The performance of a drift-diffusion device simulator on massively parallel processors is improved by modifying the preconditioner for the iterative solver and by improving the initial guess for the Newton loop. A grid-to-processor mapping scheme is presented to implement the partitioned natural ordering preconditioner on the CM-5. A new preconditioner, the block partitioned natural ordering, which may include fill-ins, improves performance on the CM-5 in terms of CPU time and convergence behavior. A multigrid discretization used to implement a block Newton initial guess routine is observed to decrease the CPU time by a factor of two. Extensions of the initial guess routine show a further reduction in the number of linear iterations on the final fine grid.

  • The Enhancement of Electromigration Lifetime under High Frequency Pulsed Conditions

    Kazunori HIRAOKA  Kazumitsu YASUDA  

     
    PAPER-Reliability Testing

      Vol:
    E77-A No:1
      Page(s):
    195-203

    Experimental evidence of a two-step enhancement in electromigration lifetime is presented through pulsed testing extending over a wide frequency range, from 7 mHz to 50 MHz. It is also found, through an accompanying failure analysis, that the failure mechanism is not affected by current pulsing. The test samples were the lower metal lines and the through-holes in double-level interconnects, and the same results were obtained for both. The temperature of the test conductor was determined with Joule heating taken into account, to eliminate lifetime estimation errors caused by temperature errors. The two-step enhancement in lifetime is extracted by normalizing the pulsed electromigration lifetime by the continuous (DC) lifetime. The first step occurs in the frequency range from 0.1 to 10 kHz, where the lifetime increases as (duty ratio)^-2, and the second step occurs above 100 kHz, where it increases as (duty ratio)^-3. The transition frequency of the first-step enhancement shifts to higher frequencies with a decrease in stress temperature or an increase in current density, whereas the transition frequency of the second step is not affected by these stress conditions. The lifetime enhancement is analyzed in relation to the relaxation process during current pulsing. In accordance with the two-step behavior, two distinct relaxation times are assumed, as opposed to the single relaxation time in other proposed models. The results of the analysis agree with the experimental results for the dependence on the frequency and duty ratio of the pulses. The two experimentally derived relaxation times are about 5 s and 1 µs.
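
    Written out, the reported duty-ratio dependence of the normalized lifetime is as follows (r is the duty ratio; the r^-1 line is the simple on-time-only expectation, added here for comparison and not taken from the paper):

        t_pulsed / t_DC ≈ r^-1        on-time scaling only (no relaxation effect)
        t_pulsed / t_DC ≈ r^-2        first-step enhancement, roughly 0.1-10 kHz
        t_pulsed / t_DC ≈ r^-3        second-step enhancement, above about 100 kHz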

  • PDM: Petri Net Based Development Methodology for Distributed Systems

    Mikio AOYAMA  

     
    INVITED PAPER

      Vol:
    E76-A No:10
      Page(s):
    1567-1579

    This article discusses PDM (Petri net based Development Methodology), which integrates approaches, modeling methods, design methods, and analysis methods in a coherent manner. Although various development techniques based on Petri nets have demonstrated advantages over conventional techniques, those techniques are rather ad hoc and lack an overall picture of the entire development process. PDM aims to provide a reference process model for developing distributed systems with various Petri net based development methods. Behavioral properties of distributed systems are an appropriate application domain for PDM.
