IEICE TRANSACTIONS on Fundamentals

Volume E77-A No.1  (Publication Date:1994/01/25)

    Special Section on Cryptography and Information Security
  • FOREWORD

    Kenji KOYAMA  

     
    FOREWORD

      Page(s):
    1-1
  • A New Cryptanalytic Method for FEAL Cipher

    Mitsuru MATSUI  Atsuhiro YAMAGISHI  

     
    PAPER

      Page(s):
    2-7

    We propose a new known-plaintext attack on the FEAL cipher. Our method differs from previous statistical ones in that it derives the extended key deterministically. As a result, it is possible to break FEAL-4 with 5 known plaintexts and FEAL-6 with 100 known plaintexts, respectively. Moreover, we show a method to break FEAL-8 with 2^15 known plaintexts faster than an exhaustive key search.

  • Message Authentication Codes and Differential Attack

    Kazuo OHTA  Mitsuru MATSUI  

     
    PAPER

      Page(s):
    8-14

    We discuss the security of Message Authentication Code (MAC) schemes from the viewpoint of differential attack, and propose an attack that is effective against DES-MAC and FEAL-MAC. The attack derives the secret authentication key in the chosen plaintext scenario. For example, DES(8-round)-MAC can be broken with 2^34 pairs of plaintext, while FEAL8-MAC can be broken with 2^22 pairs. The proposed attack is applicable to any MAC scheme, even if the 32 bits of the MAC are randomly selected from the 64 bits of ciphertext generated by a cryptosystem vulnerable to differential attack in the chosen plaintext scenario.

  • New Proposal and Comparison of Closure Tests--More Efficient than the CRYPTO'92 Test for DES--

    Hikaru MORITA  Kazuo OHTA  

     
    PAPER

      Page(s):
    15-19

    The well-known closure tests, the cycling closure test (CCT) and the meet-in-the-middle closure test (MCT), were introduced by Kaliski, Rivest and Sherman to analyze the algebraic properties of cryptosystems, and CCT indicates that DES is not closed. Although Coppersmith showed that DES can be proved not to be closed in a particular way, the closure tests can check various kinds of cryptosystems in general. Thus, successors to MCT and CCT have been proposed at CRYPTO. This paper extends the MCT successor, the switching closure test (SCT), to apply to DES-like cryptosystems, and shows that this SCT variant is more efficient than the closure test proposed at CRYPTO'92, because the SCT variant establishes a better relationship between the computation cost and the probability of error (the evaluation index). The MCT successors are more important than the CCTs, because the MCTs can directly break closed cryptosystems. Therefore, if one wants to detect the closure property of cryptosystems in general, the SCT variant is the better choice.

  • Identity-Based Non-interactive Key Sharing

    Hatsukazu TANAKA  

     
    PAPER

      Page(s):
    20-23

    In this paper an identity-based non-interactive key sharing scheme (IDNIKS) is proposed in order to realize the original concept of the identity-based cryptosystem, for which no secure realization scheme has been proposed. First the necessary conditions for a secure realization of IDNIKS are considered from two different points of view: (i) the possibility of sharing a common key non-interactively and (ii) the security against entities' conspiracy. Then a new non-interactive key sharing scheme is proposed, whose security depends on the difficulty of factoring. The most important contribution is to have succeeded in obtaining any entity's secret information as an exponent of that entity's identity information. The security of IDNIKS against entities' conspiracy is also considered in detail.

  • Electronic Voting Scheme Allowing Open Objection to the Tally

    Kazue SAKO  

     
    PAPER

      Page(s):
    24-30

    In this paper, we present an electronic voting scheme with a single voting center using an anonymous channel. The proposed scheme is a 3-move protocol between each voter and the center, with one extra move if a voter wants to raise an objection to the tally. This objection can be broadcast widely, since it does not disclose the vote itself to any party other than the center. The main idea of the proposal is that each voter anonymously sends a public key signed by the center and an encrypted vote decryptable with this key. Since even the center cannot modify a received ballot into a different vote under the same public key, the key can be used as evidence when raising an open objection to the tally.

  • Subliminal Channels for Transferring Signatures: Yet Another Cryptographic Primitive

    Kouichi SAKURAI  Toshiya ITOH  

     
    PAPER

      Page(s):
    31-38

    This paper considers the subliminal channel, hidden in an identification scheme, for transferring signatures. We observe that the direct parallelization of the Fiat-Shamir identification scheme has a subliminal channel for the transmission of a digital signature. A positive aspect of this hidden channel is that it shows how to transfer signatures without secure channels. As a formalization of such an application, we introduce a new notion called the privately recordable signature. A privately recordable signature is generated in an interactive protocol between a signer and a verifier, and only the verifier can keep the signature, while no third-party adversary can record it. In this scheme, the disclosure of the verifier's private coin turns the signer's signature into an ordinary digital signature which can be verified by anybody with the signer's public key. The basic idea of our construction suggests the novel primitive that secure signature transfer without secret channels could be constructed using only a one-way function (without a trapdoor).

  • Demonstrating Possession without Revealing Factors

    Hiroki SHIZUYA  Kenji KOYAMA  Toshiya ITOH  

     
    PAPER

      Page(s):
    39-46

    This paper presents a zero-knowledge interactive protocol that demonstrates that two factors a and b of a composite number n (= ab) are really known by the prover, without revealing the factors themselves. Here the factors a and b need not be primes. The security of the protocol is based on the difficulty of computing discrete logarithms modulo a large prime.

  • On the Knowledge Tightness of Zero-Knowledge Proofs

    Toshiya ITOH  Atsushi KAWAKUBO  

     
    PAPER

      Page(s):
    47-55

    In this paper, we study the knowledge tightness of zero-knowledge proofs. To this end, we present a new measure for the knowledge tightness of zero-knowledge proofs and show that if a language L has a bounded round zero-knowledge proof with knowledge tightness t(|x|) ≤ 2 - |x|^(-c) for some c > 0, then L ∈ BPP, and that any language L ∈ AM has a bounded round zero-knowledge proof with knowledge tightness t(|x|) ≤ 2 - 2^(-O(|x|)) under the assumption that collision intractable hash functions exist. This implies that in the case of a bounded round zero-knowledge proof for a language L ∉ BPP, the optimal knowledge tightness is "2" unless AM = BPP. In addition, we show that any language L ∈ IP has an unbounded round zero-knowledge proof with knowledge tightness t(|x|) ≤ 1.5 under the assumption that nonuniformly secure probabilistic encryptions exist.

  • On the Knowledge Complexity of Arthur-Merlin Games

    Toshiya ITOH  Tatsuhiko KAKIMOTO  

     
    PAPER

      Page(s):
    56-64

    In this paper, we investigate the knowledge complexity of interactive proof systems and show that (1) under blackbox simulation, if a language L has a bounded move public coin interactive proof system with polynomially bounded knowledge complexity in the hint sense, then the language L itself has a one move interactive proof system; and (2) under blackbox simulation, if a language L has a three move private coin interactive proof system with polynomially bounded knowledge complexity in the hint sense, then the language L itself has a one move interactive proof system. These results imply that, as far as blackbox simulation is concerned, no language L ∈ AM\MA can have a bounded move public coin (or three move private coin) interactive proof system with polynomially bounded knowledge complexity in the hint sense unless AM = MA. In addition, we present a definite distinction between knowledge complexity in the hint sense and in the strict oracle sense, i.e., any language in AM (resp. IP) has a two (resp. unbounded) move public coin interactive proof system with polynomially bounded knowledge complexity in the strict oracle sense.

  • A Note on AM Languages Outside NP ∪ co-NP

    Hiroki SHIZUYA  Toshiya ITOH  

     
    PAPER

      Page(s):
    65-71

    In this paper we investigate AM languages that seem to be located outside NP ∪ co-NP. We give two natural examples of such AM languages, GIP and GH, which stand for Graph Isomorphism Pattern and Graph Heterogeneity, respectively. We show that GIP is in Δ^P_2 ∩ AM ∩ co-AM but is unlikely to be in NP ∪ co-NP, and that GH is in Δ^P_2 ∩ AM but is unlikely to be in NP ∪ co-AM. We also show that GIP is in SZK. We then discuss some structural properties related to these languages: any language that is polynomial time truth-table reducible to GIP is in AM ∩ co-AM; GIP is in co-SZK if SZK ∩ co-SZK is closed under conjunctive polynomial time bounded-truth-table reducibility; both GIP and GH are in DP. Here DP is the class of languages that can be expressed in the form X ∩ Y, where X ∈ NP and Y ∈ co-NP.

  • On Claw Free Families

    Wakaha OGATA  Kaoru KUROSAWA  

     
    PAPER

      Page(s):
    72-80

    This paper points out that there are two types of claw free families with respect to the level of claw freeness. We formulate them as weak claw free families and strong claw free families. Then, we present sufficient conditions for each type of claw free family. (A similar result is known for weak claw free families.) They are represented as certain algebraic forms of one way functions. A new example of a strong claw free family is also given.

  • Secure Addition Sequence and Its Application on the Server-Aided Secret Computation Protocols

    Chi-Sung LAIH  Sung-Ming YEN  

     
    PAPER

      Page(s):
    81-88

    The server-aided secret computation (SASC) protocol, also called the verifiable implicit asking protocol, is a protocol in which a powerful but untrusted auxiliary device (server) helps a smart card (client) compute a secret function efficiently. In this paper, we extend the concept of the addition sequence to the secure addition sequence and develop an efficient algorithm for constructing such a sequence. By incorporating the secure addition sequence into the SASC protocol, the performance of the SASC protocol can be further enhanced.

  • New Key Generation Algorithm for RSA Cryptosystem

    Ryuichi SAKAI  Masakatu MORII  Masao KASAHARA  

     
    PAPER

      Page(s):
    89-97

    For improving the RSA cryptosystem, more desirable conditions on key structures have been intensively studied. Recently, M. J. Wiener presented a cryptanalytic attack on the use of small RSA secret exponents. To be secure against Wiener's attack, the size of the secret exponent d should be chosen to be more than one-quarter of the size of the modulus n = pq (in bits). Moreover, in many cases it is desirable to make the public exponent e as small as possible. However, if a small d is chosen first, as in a digital signature system with a smart card, the size of e inevitably grows to that of n when the conventional key generation algorithm is used. This paper presents a new algorithm, Algorithm I, for generating RSA keys that are secure against Wiener's attack. With Algorithm I, it is possible to choose smaller sizes for the RSA exponents under certain conditions on the key parameters. For example, with Algorithm I, we can construct RSA keys with a public exponent e of two-thirds and a secret exponent d of one-third of the size of the modulus n (in bits). Furthermore, we present a modified version of Algorithm I, Algorithm II, for generating strong RSA keys for which factoring n remains difficult. Finally, we analyze the performance of Algorithm I and Algorithm II.
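
    As background for why a small secret exponent normally forces a large public exponent, the sketch below reproduces only the conventional key generation that Algorithm I improves upon (toy-sized primes, purely illustrative, not the paper's algorithm): d is chosen first and e is derived as its inverse modulo φ(n), so e typically ends up roughly the size of n.

        # Conventional RSA key generation with the secret exponent d chosen first.
        # Toy parameters for illustration only; real keys use large random primes.
        p, q = 1009, 1013              # small primes (hypothetical example values)
        n = p * q                      # modulus
        phi = (p - 1) * (q - 1)        # Euler's totient of n
        d = 101                        # small secret exponent, gcd(d, phi) = 1
        e = pow(d, -1, phi)            # public exponent forced to be d's inverse mod phi
        assert (d * e) % phi == 1
        print(n, e)                    # e generally comes out about as large as n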

  • Elliptic Curves Suitable for Cryptosystems

    Atsuko MIYAJI  

     
    PAPER

      Page(s):
    98-106

    Koblitz and Miller proposed a method by which the group of points on an elliptic curve over a finite field can be used for public-key cryptosystems instead of a finite field. To realize signature or identification schemes on a smart card, we need to reduce both the amount of data stored in the smart card and the amount of computation performed by it. In this paper, we show how to construct such elliptic curves while keeping security high.
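
    For reference, the standard addition law on an elliptic curve y^2 = x^3 + ax + b over a field of characteristic not equal to 2 or 3 (textbook background, not a result of this paper) is, for P = (x_1, y_1) and Q = (x_2, y_2) with P ≠ -Q:

        \lambda = \frac{y_2 - y_1}{x_2 - x_1} \ (P \neq Q), \qquad \lambda = \frac{3x_1^2 + a}{2y_1} \ (P = Q),
        x_3 = \lambda^2 - x_1 - x_2, \qquad y_3 = \lambda(x_1 - x_3) - y_1.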

  • Special Section on Reliability
  • FOREWORD

    Kuniomi NAKAMURA  

     
    FOREWORD

      Page(s):
    107-108
  • Software Reliability Measurement and Assessment with Stochastic Differential Equations

    Shigeru YAMADA  Mitsuhiro KIMURA  Hiroaki TANAKA  Shunji OSAKI  

     
    PAPER-Software Reliability

      Page(s):
    109-116

    In this paper, we propose a plausible software reliability growth model obtained by applying the mathematical technique of stochastic differential equations. First, we extend a basic differential equation describing the average behavior of the software fault-detection process during the testing phase to a stochastic differential equation of Itô type, and derive the probability distribution of its solution process. Second, we obtain several software reliability measures from the probability distribution. Finally, applying the method of maximum likelihood, we estimate the unknown parameters in our model by using data available from actual software testing procedures, and numerically show the stochastic behavior of the number of faults remaining in the software system. Further, the model is compared with existing software reliability growth models in terms of goodness-of-fit.
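
    As an illustration of this kind of extension (a commonly cited form given only as a sketch; the paper's exact equations may differ), a deterministic fault-detection model dN(t)/dt = b [a - N(t)] can be perturbed by noise in the detection rate to give an Itô stochastic differential equation

        dN(t) = b\,[a - N(t)]\,dt + \sigma\,[a - N(t)]\,dW(t),

    where a is the expected total number of faults, b the fault-detection rate, σ the noise magnitude, and W(t) a standard Wiener process.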

  • Studies of Systems Reliability Growth by the Analysis on Decreasing Rate of Unavailability

    Masayoshi FURUYA  

     
    PAPER-System Reliability

      Page(s):
    117-121

    This is the full text of my presentation titled "Evaluation of Maintainability Improvement by Systems Reliability Growth" at the First Beijing International Conference on Reliability, Maintainability and Safety (BICRMS'92). This paper describes methods for evaluating the reliability growth of field working systems by surveying maintainability improvement, and it also touches upon customer satisfaction. As unavailability is suitable for measuring reliability, I use the decrease in unavailability per month as a means to evaluate reliability and its growth. "Maintainability" is broadly defined as a system's capability to maintain, repair and recover its functions with the aid of failsoft and RAISIS. The term "customer satisfaction" is difficult to define, but on a practical market basis it can be fairly easily and objectively measured by examining the cancellation rate by customers. This paper covers topics such as: (1) when a system fails, it can restore its original functions, although, strictly speaking, such system changes are statistically classified as different systems; (2) despite this, we need to evaluate a specific system's reliability continuously, and study reliability growth, industrial life, and customer satisfaction. Unavailability can be reduced by improving systems through upgrading components.

  • MTBF for Consecutive-k-out-of-n: F Systems with Nonidentical Component Availabilities

    Masafumi SASAKI  Naohiko YAMAGUCHI  Tetsushi YUGE  Shigeru YANAGI  

     
    PAPER-System Reliability

      Page(s):
    122-128

    Mean Time Between Failures (MTBF) is an important measure of practical repairable systems, but it has not been obtained for a repairable linear consecutive-k-out-of-n: F system. We first present a general formula for the (steady-state) availability of a repairable linear consecutive-k-out-of-n: F system with nonidentical components by employing the cut set approach or a topological availability method. Second, we present a general formula for the frequency of system failures of a repairable linear consecutive-k-out-of-n: F system with nonidentical components. Then the MTBF for the repairable linear consecutive-k-out-of-n: F system is shown by using the frequency of system failure and the availability. Lastly, we derive some figures which show the relationship between the MTBF and the repair rate µ or ρ (= λ/µ) in the repairable linear consecutive-k-out-of-n: F system. The figures can be easily used and are useful for reliability design.

  • Reliability of a 3-State System Subject to Flow Quantity Constraint

    Tetsushi YUGE  Masafumi SASAKI  Shigeru YANAGI  

     
    PAPER-System Reliability

      Page(s):
    129-133

    This paper presents two approaches for computing the reliability of complex networks subject to two kinds of failure, open failure and shorted failure. The reliabilities of some series-parallel networks have been considered by many analysts; however, practical systems are more complex. The methods given in this paper can be applied not only to series-parallel networks but also to non-series-parallel networks composed of non-identical and independent components subject to two kinds of failure. This paper also deals with a network subject to a flow quantity constraint, such as one required to control j or more separate paths. For such a system it is difficult to obtain the system reliability because the number of states to be considered is extremely large compared with a conventional 2-state device system. In this paper we obtain the reliabilities of such systems by a combinatorial approach and by a simulation approach.

  • A Factored Reliability Formula for Directed Source-to-All-Terminal Networks

    Yoichi HIGASHIYAMA  Hiromu ARIYOSHI  Isao SHIRAKAWA  Shogo OHBA  

     
    PAPER-System Reliability

      Page(s):
    134-143

    In a probabilistic graph (network), source-to-all-terminal (SAT) reliability may be defined as the probability that there exists at least one path consisting only of successful arcs from the source vertex s to every other vertex. In this paper, we define an optimal SAT reliability formula as the one with the minimal number of literals or operators. First, this paper describes an arc-reduction (open- or short-circuiting) method for obtaining a factored formula of a directed graph. Next, we discuss a simple strategy to get an optimal formula as a product of the reliability formulas of vertex-section graphs, each of which contains a distinct strongly connected component of the given graph. This method reduces the computing cost and data processing effort required to generate the optimal factored formula, which contains no identical product terms.
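
    For reference, arc reduction of this kind rests on the standard factoring (pivotal decomposition) theorem of network reliability (textbook background, stated here only as a sketch):

        R(G) = p_e \, R(G \mid e \text{ short-circuited}) + (1 - p_e) \, R(G \mid e \text{ open-circuited}),

    where p_e is the success probability of the pivoted arc e.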

  • Some Remarks on MTBF's for Non-homogeneous Poisson Processes

    Hirofumi KOSHIMAE  Hiroaki TANAKA  Shunji OSAKI  

     
    PAPER-System Reliability

      Page(s):
    144-149

    Non-homogeneous Poisson Processes (NHPP's) can be applied to analyzing reliability growth models for hardware and/or software. By evaluating the Mean Time Between Failures (MTBF's) for such processes, we can assess the present status (the degree of improvement). However, it is difficult to evaluate the MTBF's for such processes analytically except in the simplest cases. The so-called instantaneous MTBF's, which can be easily evaluated, are applied in practice instead of the exact MTBF's. In this paper, we discuss both MTBF's analytically, and derive the conditions for the existence of both the exact and the instantaneous MTBF's. We further illustrate both MTBF's for the Weibull process and the S-shaped reliability growth model numerically.
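
    For orientation, the standard definitions involved (not this paper's derivation) are: for an NHPP with mean value function m(t) and intensity λ(t) = dm(t)/dt, the instantaneous MTBF is commonly defined as 1/λ(t); for the Weibull (power-law) process with m(t) = λ t^β this gives

        \mathrm{MTBF}_{\mathrm{inst}}(t) = \frac{1}{\lambda \beta t^{\beta - 1}}.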

  • Abnormal Epitaxial Layer of AlGaAs/GaAs Solar Cells for Space Applications

    Sumio MATSUDA  Masato UESUGI  Susumu YOSHIDA  

     
    PAPER-Failure Physics and Failure Analysis

      Page(s):
    150-157

    We found degraded output power due to discoloration of an abnormal epitaxial layer caused by supercooling of the residual melt in the liquid phase epitaxy (LPE) process of AlGaAs/GaAs heteroface solar cells developed to improve the conversion efficiency of solar cells for satellites. We studied the discoloration mechanism and found effective methods for obtaining a good epitaxial layer. Using these results, we manufactured about 80,000 solar cells and employed them in the Japanese domestic Communication Satellite-3 (CS-3) launched by the National Space Development Agency of Japan (NASDA). Five years after launch, these solar cells are still supplying more output power than predicted. This paper describes the reliability improvements for the surface of the epitaxial layer and the successful results after 5 years of space operation.

  • A Study on Reliability and Failure Mechanism of T-Shaped Gate HEMTs

    Takahide ISHIKAWA  Kenji HOSOGI  Masafumi KATSUMATA  Hiroyuki MINAMI  Yasuo MITSUI  

     
    PAPER-Failure Physics and Failure Analysis

      Page(s):
    158-165

    This paper describes the reliability of recess-type T-shaped gate HEMTs and their major failure mechanism, investigated by accelerated life tests and subsequent failure analysis. In this study, high temperature storage tests with a DC bias condition have been conducted on three different recess depths of 100, 125, and 150 nm. The results have clarified that the shallow recess devices of under 125 nm depth show no degradation in the minimum noise figure Fmin or gain Ga characteristics, indicating that standard HEMT devices, whose recess depth is chosen to be well under 125 nm, possess a sufficient reliability level. However, the devices with a deep recess of 150 nm have shown degradation in both Fmin and Ga. Precise failure analyses, including SEM observation and von Mises stress simulation, have revealed for the first time that the main failure mode in deeply recessed T-shaped gate HEMTs is an increase in the gate electrode's parasitic resistance Rg, which is caused by separation of the "head" and "stem" parts of the T-shaped gate electrode due to thermo-mechanical stress concentration.

  • Improvement of "Soft Breakdown" Leakage of off-State nMOSFETs Induced by HBM ESD Events Using Drain Engineering for LDD Structure

    Ikuo KURACHI  Yasuhiro FUKUDA  

     
    PAPER-Failure Physics and Failure Analysis

      Page(s):
    166-173

    Leakage enhancement after an ESD event has been analyzed for output buffer LDD MOSFETs. The HBM ESD failure threshold for the LDD MOSFETs is only 200-300 V, and the failure is the leakage enhancement of the off-state MOSFETs called "soft breakdown" leakage. This leakage enhancement is thought to be caused by electrons trapped in the gate oxide and/or the creation of interface states at the gate-overlapped drain region due to snap-back stress during the ESD event. The mechanism of the leakage can be explained by band-to-band and/or interface state-to-band tunneling of electrons. The improvement of the HBM ESD threshold has also been evaluated by using two types of drain engineering: additional arsenic implantation for the output LDD MOSFETs, and an "offset" gate MOSFET as a protection circuit for the output pins. By using these drain engineering techniques, the threshold can be improved to more than 2000 V.

  • Focused Ion Beam Applications to Failure Analysis of Si Device Chip

    Kiyoshi NIKAWA  

     
    PAPER-Failure Physics and Failure Analysis

      Page(s):
    174-179

    New focused ion beam (FIB) methods for microscopic cross-sectioning and observation, microscopic cross-sectioning and elemental analysis, and aluminum film microstructure observation are presented. The new methods are compared with the conventional methods and the conventional FIB methods from four viewpoints: ease of analysis, analysis time, spatial resolution, and pinpointing precision. As a result, the new FIB methods are shown to be the best overall when judged from the viewpoints above.

  • Barrier Metal Effect on Electro- and Stress-Migration

    Tetsuaki WADA  

     
    PAPER-Failure Physics and Failure Analysis

      Page(s):
    180-186

    A new effect of a barrier metal laid under the 1st aluminum layer on electromigration has been found in interconnect vias. This effect can be explained by Si nodules at the vias. Stress-induced open failures occurred at via holes and depend on the size of the vias. Stress-migration at vias can be prevented by a TiN barrier metal between the 1st and 2nd metals. The reliability of electro- and stress-migration at interconnect vias can be dramatically improved by using a TiN barrier metal.

  • Via Electromigration Characteristics in Aluminum Based Multilevel Interconnection

    Takahisa YAMAHA  Masaru NAITO  Tadahiko HOTTA  

     
    PAPER-Failure Physics and Failure Analysis

      Page(s):
    187-194

    Via electromigration (EM) performance of aluminum based metallization (AL) systems has been investigated for via chains of 1500-4000 vias of 1.0 micron diameter. The results show that via EM lifetime cannot be enhanced by a simple increase of M2 step coverage in AL/AL vias, because the EM-induced voids are formed at the AL/AL via interface, where electrons flow from M1 to M2, even in the case of very poor M2 step coverage. The voids are induced by the boundary layer in AL/AL vias, where a temperature gradient causes a discontinuity in the aluminum atom flux. The failure location does not move even though via EM lifetime can be improved by controlling the stress in the passivation, the sputter etch removal thickness, and the grain size of the first metal. Next, the effect of the boundary layer is eliminated by depositing titanium under the second aluminum or depositing WSi on the first aluminum. In both cases, via EM lifetime is improved and the failure locations change. In particular, the WSi layer suppresses void formation more effectively than titanium. Models for the failure mechanism in each metallization system are further discussed.

  • The Enhancement of Electromigration Lifetime under High Frequency Pulsed Conditions

    Kazunori HIRAOKA  Kazumitsu YASUDA  

     
    PAPER-Reliability Testing

      Page(s):
    195-203

    Experimental evidence of a two-step enhancement in electromigration lifetime is presented through pulsed testing that extends over a wide frequency range from 7 mHz to 50 MHz. It is also found, through an accompanying failure analysis, that the failure mechanism is not affected by current pulsing. Test samples were the lower metal lines and the through-holes in double-level interconnects. The same results were obtained for both samples. The testing temperature of the test conductor was determined considering the Joule heating, to eliminate errors in lifetime estimation due to temperature errors. The two-step enhancement in lifetime is extracted by normalizing the pulsed electromigration lifetime by the continuous one. The first step occurs in the frequency range from 0.1 to 10 kHz, where the lifetime increases with (duty ratio)^(-2), and the second step occurs above 100 kHz with (duty ratio)^(-3). The transition frequency in the first-step enhancement shifts to the higher frequency region with a decrease in stress temperature or an increase in current density, whereas the transition frequency in the second step is not affected by these stress conditions. The lifetime enhancement is analyzed in relation to the relaxation process during the current pulsing. According to the two-step behavior, two distinct relaxation times are assumed, as opposed to the single relaxation time in other proposed models. The results of the analysis agree with the experimental results for the dependence on the frequency and duty ratio of the pulses. The two experimentally derived relaxation times are about 5 s and 1 µs.

  • Evaluation of the SO2 and NO2 Mixed Gas Tests for Electronic Parts

    Sadao IDA  Atsumi KURAMOCHI  Hiroshi WATANABE  Mitsuhiko KOYAMA  Kazutoshi GOTO  

     
    PAPER-Reliability Testing

      Page(s):
    204-207

    This paper describes mixed gas systems of SO2 and NO2, which are the essential corrosive gases in the ordinary atmospheric environment of electronic parts. It describes the corrosion product compositions and the behavior of copper in the mixed and separate gases. The results of our tests show the following: (1) the weight of corrosion products with the SO2-NO2 mixed gas approximates the sum of those with the individual gases; however, the corrosion products of SO2 are affected by NO2. (2) Tests with the SO2-NO2 mixed gas closely simulate tests of electronic parts in the ordinary atmospheric environment.

  • Long-Term Reliability Testing of Electric Double-Layer Capacitors

    Munekazu AOKI  Kazuhiko SATO  Yoshihiro KOBAYASHI  

     
    PAPER-Evaluation of Reliability Improvement

      Page(s):
    208-212

    It has been 15 years since we started producing electric double-layer capacitors (also known as the Super Capacitor) in 1978. Over the years we have introduced improvements that increased reliability and extended life. For example, after subjecting capacitors manufactured in 1984 and 1990 to load life tests (70°C, 5.5 V) for 2,000 hours, we discovered that the rate of change in capacitance (ΔC/C) of capacitors manufactured in 1990 was less than one-half that of capacitors manufactured in 1984. This shows that we have successfully increased the life of our electric double-layer capacitors. We investigated contributing factors, namely the volume of the electrolyte solution and better sealing properties. In the load life test, we observed that when the ratio of the weights of the electrolyte solution and the powdered activated carbon (hereinafter referred to as LB) was increased, the time it took before ΔC/C reached -30% was lengthened. This means that increasing LB also increases life. Furthermore, we also observed that when the gas permeability rate of the collector's rubber material was decreased in the load life test (70°C, 5.5 V), the time it took before ΔC/C reached -30% was longer. Therefore, life depends on the gas permeability rate (sealing property) of the collector rubber.

  • High Reliability Design Method of LC Tuning Circuit and Substantiation of Aging Characteristics for 20 Years

    Mitsugi SAITA  Tatsuo YOSHIE  Katsumi WATANABE  Kiyoshi MURAMORI  

     
    PAPER-Evaluation of Reliability Improvement

      Page(s):
    213-219

    In 1963, the authors began to develop a tuning circuit (hereafter referred to as the 'circuit') consisting of an inductor, fixed capacitors and a variable capacitor. The circuit required very high accuracy and stability, and the aging influence on the resonant frequency needed to be kept within Δf/f0 ≤ 0.12% over 20 years. When we started, there was no methodology available for designing such a long-term stable circuit, so we reinvestigated our previous studies concerning aging characteristics and formed a design concept. We designed the circuit bearing in mind that an inductor is subject to natural and stress demagnetization (as indicated by disaccommodation), and assumed that a capacitor changes its characteristics linearly over a logarithmic scale of time. (This assumption was based on short-term test results derived from previous studies.) We measured the aging characteristics of the circuits at room temperature for 20 years, starting in 1966. The measurement results from the 20-year study revealed that the aging characteristics predicted by the design concept were reasonably accurate.
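
    As background, a standard first-order sensitivity relation (not taken from the paper) shows why the drifts of both the inductor and the capacitors must be budgeted: for an LC resonator with f_0 = 1/(2\pi\sqrt{LC}), small component drifts shift the resonant frequency by approximately

        \frac{\Delta f}{f_0} \approx -\frac{1}{2}\left(\frac{\Delta L}{L} + \frac{\Delta C}{C}\right).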

  • Improvement of Reliability of Large-Sized Ceramic Capacitors and Dummy Resistors for the High Power Transmitter

    Tohru MIZOKAMI  Hiroki TAKAZAWA  Eiichi KAWABATA  Yuzi OGATA  Haruo OHTA  Kazuaki WAKAI  Kazuhisa HAYEIWA  

     
    PAPER-Evaluation of Reliability Improvement

      Page(s):
    220-227

    This paper describes effective countermeasures against exfoliation of large-sized ceramic capacitors and deterioration of dummy resistors, and the development of a spark sensor using UVtrons, at 300-500 kW transmitting stations. Cracks and exfoliation were found at the electrodes of large-sized ceramic capacitors in the output circuit of the 500 kW transmitter. The exfoliation was caused by the temperature rise and thermal fatigue at the electrode, whose nickel plating contained iron. A pure nickel-plated electrode containing no iron and a new soldering method using disk-type solder with a large adhesive area were employed in order to reduce the temperature rise. The temperature rise of the improved capacitor was 18°C lower than that of the conventional one. Deterioration of the ELEMA resistors of the 300 kW dummy antenna was also discovered. The damage to the resistors was caused by concentration of the electric current, followed by thermal stress cycles which caused mechanical fatigue at the electrode. Therefore, oval-shaped resistors with a much longer current path (20% longer), to suppress the concentration of current flow, and with a much slower temperature rise were newly developed. If sparks occur at the DC or RF high-voltage sections of high power transmitting equipment, the discharge points can be seriously damaged by the transmitter energy itself. In order to prevent this, a spark detector using UV (ultraviolet) trons was developed and installed at the matching circuit of the 500 kW transmitter. Conventional UV sensors with only one UVtron could not detect feeble discharges and sparks with a duration of less than 150 ms because of false outputs caused by background noise. Since a three-out-of-four UVtron voting scheme is employed, the possibility of producing a false output is theoretically only about once in 445 years. This means an extremely reliable and sensitive spark detection system has been constructed. These countermeasures have greatly improved the reliability of the transmitting equipment. No damage has been found in the transmitters since.

  • Optimal Redundancy of Systems for Minimizing the Probability of Dangerous Errors

    Kyoichi NAKASHIMA  Hitoshi MATZNAGA  

     
    PAPER-Reliability and Safety

      Page(s):
    228-236

    For systems in which the probability that an incorrect output is observed differs with the input values, we adopt the redundant usage of n copies of identical systems, which we call the n-redundant system. This paper presents a method to find the optimal redundancy of systems for minimizing the probability of dangerous errors (D-errors). First, it is proved that a k-out-of-n redundancy, or a mixture of two kinds of k-out-of-n redundancies, minimizes the probability of D-errors under the condition that the probability of output errors, including both dangerous errors and safe errors, is below a specified value. Next, an algorithm is given to find the optimal series-parallel redundancy of systems by using properties of the distance between two structure functions.

  • Optimal Free-Sensors Allocation Problem in Safety Monitoring System

    Kenji TANAKA  Keiko SAITOH  

     
    LETTER-Reliability and Safety

      Page(s):
    237-239

    This paper proposes an optimal free-sensors allocation problem (OFSAP) in safety monitoring systems. OFSAP is the problem of deciding the optimal allocation of several sensors, which we call free sensors, to plural objects. The solution of OFSAP gives the optimal allocation which minimizes the expected losses caused by failed-dangerous (FD) failures and failed-safe (FS) failures; an FD-failure is the failure to generate an alarm for an unsafe object, and an FS-failure is the generation of an alarm for a safe object. We show the unexpected result that, under certain conditions, a safer object should be monitored by more sensors.

  • Preventive Replacement Policies and Their Application to Weibull Distribution

    Michio HORIGOME  Yoshito KAWASAKI  Qin Qin CHEN  

     
    LETTER-Maintainability

      Page(s):
    240-243

    This letter deals with the reliability function in the case of periodic preventive replacement of items in order to increase the MTBF; that is, two replacement policies are considered: strictly periodic replacement (SPR) and randomly periodic replacement (RPR). We stress a simple introduction to the reliability theory under preventive replacement policies using the Laplace transform, and obtain theoretical results for SPR and RPR. These results are then applied to the Weibull distribution, and finally, to show useful information on preventive replacement, numerical results for SPR are provided.
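
    For orientation, a textbook relation (not necessarily the letter's derivation) for strictly periodic replacement is: if an item with reliability function R(t) is renewed at every fixed interval T and the system fails at the first item failure, the system reliability is

        R_{\mathrm{SPR}}(t) = [R(T)]^{n} \, R(t - nT), \qquad n = \lfloor t/T \rfloor,

    for example with a Weibull item R(t) = \exp[-(t/\eta)^{\beta}].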

  • A Note on Optimal Checkpoint Sequence Taking Account of Preventive Maintenance

    Masanori ODAGIRI  Naoto KAIO  Shunji OSAKI  

     
    LETTER-Maintainability

      Page(s):
    244-246

    Checkpointing is one of the most powerful tools for operating a computer system with high reliability. We should execute checkpointing optimally in some sense. This note shows the optimal checkpoint sequence minimizing the expected loss. Numerical examples are shown for illustration.

  • Regular Section
  • A Combined Fast Adaptive Filter Algorithm with an Automatic Switching Method

    Youhua WANG  Kenji NAKAYAMA  

     
    PAPER-Adaptive Signal Processing

      Page(s):
    247-256

    This paper proposes a new combined fast algorithm for transversal adaptive filters. The fast transversal filter (FTF) algorithm and the normalized LMS (NLMS) algorithm are combined in the following way. In the initialization period, the FTF is used to obtain fast convergence. After convergence, the algorithm is switched to the NLMS algorithm, because the FTF cannot be used for a long time due to its numerical instability. A nonstationary environment, that is, a time-varying unknown system for instance, is classified into three categories: slowly time-varying, fast time-varying and suddenly time-varying systems. The NLMS algorithm is applied in the first situation. In the latter two cases, however, the NLMS algorithm cannot provide good performance, so the FTF algorithm is selected. Switching between the two algorithms is controlled automatically by using the difference of the MSE sequence: if the difference exceeds a threshold, the FTF is selected; otherwise, the NLMS is selected. Compared with the RLS algorithm, the proposed combined algorithm needs less computation while maintaining the same performance. Furthermore, compared with the FTF algorithm, it provides numerically stable operation.
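
    As a reference point for the NLMS half of the combination, the sketch below shows only the generic, textbook NLMS update (it is not the paper's combined algorithm; the FTF recursion and the exact switching rule are omitted, and the step size mu is an assumed value):

        # Normalized LMS (NLMS) update for a transversal adaptive filter (illustrative sketch).
        import numpy as np

        def nlms_step(w, x, d, mu=0.5, eps=1e-8):
            """One NLMS iteration: w = weight vector, x = input tap vector, d = desired sample."""
            y = np.dot(w, x)                            # filter output
            e = d - y                                   # a priori error
            w = w + mu * e * x / (eps + np.dot(x, x))   # normalized gradient step
            return w, e

    A switching scheme in the spirit of the paper could monitor the change in the squared-error sequence and hand control to a fast (FTF-like) algorithm when that change exceeds a threshold; that logic is not reproduced here.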

  • Continuous Relation between Models and System Performances--A Case Study for Optimal Servosystems--

    Hajime MAEDA  Shinzo KODAMA  

     
    PAPER-Control and Computing

      Page(s):
    257-262

    This paper is concerned with the continuous relation between models of the plant and the predicted performance of the system designed on the basis of those models. To state the problem more precisely, let P be the transfer matrix of a plant model, and let A be the transfer matrix of interest of the designed system, which is regarded as a performance measure for evaluating the designed responses. A depends upon P and is written as A = A(P). From the practical point of view, it is necessary that the function A(P) be continuous with respect to P. In this paper we consider the linear quadratic optimal servosystem with integrators (LQI) scheme as the design methodology, and prove that A(P) depends continuously on the plant transfer matrix P if the topology on the family of plant models is the graph topology. A numerical example is given to illustrate the result.

  • A Synthesis of Variable Wave Digital Filters

    Eiji WATANABE  Masato ITO  Nobuo MURAKOSHI  Akinori NISHIHARA  

     
    PAPER-Digital Signal Processing

      Page(s):
    263-271

    It is often desirable to change the cutoff frequencies of digital filters in applications such as digital electronic instruments. This paper proposes a design of variable lowpass digital filters with wider ranges of cutoff frequencies than conventional designs. Wave digital filters are used as the prototypes of the variable filters. The proposed design is based on frequency scaling in the s-domain, while the conventional ones are based on z-domain lowpass-to-lowpass transformations. A first-order approximation by the Taylor series expansion is used to make the multiplier coefficients in a wave digital filter obtained from a frequency-scaled LC filter become linear functions of the scaling parameter, which is similar to the conventional design. Furthermore, this paper discusses the reduction of the approximation error. The curvature is introduced as a figure of merit for the quality of the first-order approximation. The use of a second-order approximation for large-curvature multiplier coefficients instead of the first-order one is proposed.

  • A Method for Estimating the Mean-Squared Error of Distributed Arithmetic

    Jun TAKEDA  Shin-ichi URAMOTO  Masahiko YOSHIMOTO  

     
    PAPER-Digital Signal Processing

      Page(s):
    272-280

    It is important for LSI system designers to estimate computational errors when designing LSI's for numeric computations. Both for the prediction of the errors at an early stage of design and for the choice of a proper hardware configuration to achieve a target performance, it is desirable that the errors can be estimated in terms of a minimum of parameters. This paper presents a theoretical error analysis of multiply-accumulation implemented by distributed arithmetic (DA) and proposes a new method for estimating the mean-squared error. DA is a method of implementing the multiply-accumulation that is defined as an inner product of an input vector and a fixed coefficient vector. Using a ROM which stores partial products, DA calculates the output by accumulating the partial products bit-serially. As DA uses no parallel multipliers, it needs a smaller chip area than methods using parallel multipliers. Thus DA is effectively utilized for the LSI implementation of a digital signal processing system which requires the multiply-accumulation. It has been known that, if the input data are uniformly distributed, the mean-squared error of the multiply-accumulation implemented by DA is a function of only the word lengths of the input, the output, and the ROM. The proposed method for error estimation can calculate the mean-squared error by using the same parameters even when the input data are not uniformly distributed. The basic idea of the method is to regard the input data as a combination of uniformly distributed partial data with different word lengths. Then the mean-squared error can be predicted as a weighted sum of the contributions of the partial data, where each weight is the ratio of the partial data to the total input data. Finally, the method is applied to a two-dimensional inverse discrete cosine transform (IDCT) and the practicality of the method is confirmed by computer simulations of the IDCT implemented by DA.
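
    To make the DA mechanism concrete, the sketch below computes an inner product with a precomputed partial-product table and bit-serial shift-and-accumulate. It is a simplified illustration with unsigned integer inputs, not the paper's error model; practical DA uses two's-complement (offset-binary) handling and fractional scaling.

        # Distributed-arithmetic inner product y = sum_j c[j] * x[j] (illustrative sketch).
        def da_inner_product(x, c, bits=8):
            # "ROM": partial sums of the coefficients for every possible input bit pattern.
            K = len(c)
            rom = [sum(c[j] for j in range(K) if (addr >> j) & 1) for addr in range(1 << K)]
            acc = 0
            for b in range(bits):                  # process one bit plane per cycle, LSB first
                addr = 0
                for j in range(K):
                    addr |= ((x[j] >> b) & 1) << j
                acc += rom[addr] << b              # shift-and-accumulate the looked-up partial product
            return acc

        assert da_inner_product([3, 5, 7], [2, 4, 6]) == 3*2 + 5*4 + 7*6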

  • Function Representation by Fuzzy Reasoning

    Shin KAWASE  Niro YANAGIHARA  

     
    PAPER-Fuzzy Theory

      Page(s):
    281-290

    This paper is concerned with the problem of (exactly) representing given functions by fuzzy reasoning. We consider function representation by the fuzzy reasoning method using linguistic truth values, which is a generalization of the fuzzy reasoning due to Zadeh. Some conditions for functions to be representable are given, by which it is shown that a very large class of functions can be represented by this method. Some examples illustrating how to find "if-then rules" for fuzzy reasoning are shown. Further, in the appendix an example is given to show that the generalization is significant for the problem of function representation.

  • An Equivalence Net-Condition between Place-Liveness and Transition-Liveness of Petri Nets and Their Initial-Marking-Based Necessary and Sufficient Liveness Conditions

    Tadashi MATSUMOTO  Kohkichi TSUJI  

     
    PAPER-Graphs, Networks and Matroids

      Page(s):
    291-301

    The structural necessary and sufficient condition for "transition-liveness implies place-liveness and vice versa" in a subclass NII of general Petri nets is given as "the place and transition live Petri net, or PTL net, ÑII". Furthermore, "the one-token-condition Petri net, or OTC net, II" means that every MSDL (minimal structural deadlock) is transition and place live under at least one initial token, i.e., II is transition and place live under the above initial marking. These subclasses NII, ÑII (⊆ NII), and II (⊆ ÑII) are almost the general Petri nets, except for at least one MSTR (minimal structural trap) and at least one pair of "a virtual MSTR or a virtual STR" and "a virtual MSDL" of an MBTR (minimal behavioral trap) in connection with making an MSDL transition-live.

  • Analog Method for Solving Combinatorial Optimization Problems

    Kiichi URAHAMA  

     
    PAPER-Neural Networks

      Page(s):
    302-308

    An analog approach, alternative to the Hopfield method, is presented for solving constrained combinatorial optimization problems. In this new method, a saddle point of a Lagrangian function is searched for using a constrained dynamical system with the aid of an appropriate transformation of variables. This method always gives feasible solutions, in contrast to the Hopfield scheme, which often outputs infeasible solutions. The convergence of the method is proved theoretically, and some effective schemes are recommended for eliminating some of the variables in the case where we resort to numerical simulation. An analog electronic circuit is devised which implements this method. This circuit requires less wiring than the Hopfield networks. Furthermore, this circuit dissipates little electrical power owing to the subthreshold operation of MOS transistors. An annealing process, if desired, can be performed easily by a gradual increase in the resistance of linear resistors, in contrast to the Hopfield circuit, which requires variation of the gain of amplifiers. The objective function, called an energy, is guaranteed theoretically to decrease throughout the annealing process.
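
    As a sketch of the general saddle-point idea referred to here (standard Lagrangian gradient dynamics, not the exact dynamical system or variable transformation of the paper): to minimize f(x) subject to g(x) = 0, one can follow

        \dot{x} = -\nabla_x L(x, \lambda), \qquad \dot{\lambda} = +\nabla_\lambda L(x, \lambda), \qquad L(x, \lambda) = f(x) + \lambda^{\top} g(x),

    whose stable equilibria are saddle points of L at which the constraints are satisfied.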

  • Piecewise-Linear Analysis of Nonlinear Resistive Networks Containing Gummel-Poon Models or Shichman-Hodges Models

    Kiyotaka YAMAMURA  

     
    PAPER-Nonlinear Circuits and Systems

      Page(s):
    309-316

    Finding DC solutions of nonlinear networks is one of the most difficult tasks in circuit simulation, and many circuit designers experience difficulties in finding DC solutions using Newton's method. Piecewise-linear analysis has been studied to overcome this difficulty. However, efficient piecewise-linear algorithms have not been proposed for nonlinear resistive networks containing the Gummel-Poon models or the Shichman-Hodges models. In this paper, a new piecewise-linear algorithm is presented for solving nonlinear resistive networks containing these sophisticated transistor models. The basic idea of the algorithm is to exploit the special structure of the nonlinear network equations, namely, the pairwise-separability. The proposed algorithm is globally convergent and much more efficient than the conventional simplicial-type piecewise-linear algorithms.

  • A Sign Test for Finding All Solutions of Piecewise-Linear Resistive Circuits

    Kiyotaka YAMAMURA  

     
    PAPER-Nonlinear Circuits and Systems

      Page(s):
    317-323

    An efficient algorithm is presented for finding all solutions of piecewise-linear resistive circuits. In this algorithm, a simple sign test is performed to eliminate many linear regions that do not contain a solution. This makes the number of simultaneous linear equations to be solved much smaller. This test, in its original form, is applied to each linear region; but this is time-consuming because the number of linear regions is generally very large. In this paper, it is shown that the sign test can be applied to super-regions consisting of adjacent linear regions. Therefore, many linear regions are discarded at the same time, and the computational efficiency of the algorithm is substantially improved. The branch-and-bound method is used in applying the sign test to super-regions. Some numerical examples are given, and it is shown that all solutions are computed very rapidly. The proposed algorithm is simple, efficient, and can be easily programmed.

  • Identification of Chaotic Dynamical Systems with Back-Propagation Neural Networks

    Masaharu ADACHI  Makoto KOTANI  

     
    PAPER-Nonlinear Phenomena and Analysis

      Page(s):
    324-334

    In this paper, we clarify fundamental properties of conventional back-propagation neural networks in learning chaotic dynamical systems through numerical experiments. We train three-layer networks using the back-propagation algorithm with data from two examples of two-dimensional discrete dynamical systems. We qualitatively evaluate the trained networks with two methods: analysing the geometrical mapping structure and reconstructing an attractor by the recurrent feedback of the networks. We also quantitatively evaluate the trained networks by calculating the Lyapunov exponents, which indicate whether the dynamics of the recurrent networks are chaotic or periodic. In many cases, the trained networks show a high ability to extract the mapping structures of the original two-dimensional dynamical systems. We confirm that the Lyapunov exponents of the trained networks correspond to whether the attractors reconstructed by the recurrent networks are chaotic or periodic.
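
    For reference, the standard definition involved (not this paper's particular estimator): the largest Lyapunov exponent of a discrete map x_{t+1} = F(x_t) with Jacobian J(x) can be estimated along an orbit as

        \lambda_1 = \lim_{N \to \infty} \frac{1}{N} \ln \left\| J(x_{N-1}) \cdots J(x_1) J(x_0) \, v_0 \right\|,

    with \lambda_1 > 0 indicating chaotic dynamics and \lambda_1 < 0 indicating convergence to a fixed point or periodic orbit.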

  • A Current-Mode Implementation of a Chaotic Neuron Model Using a SI Integrator

    Nobuo KANOU  Yoshihiko HORIO  Kazuyuki AIHARA  Shogo NAKAMURA  

     
    LETTER-Nonlinear Circuits and Systems

      Page(s):
    335-338

    This paper presents an improved current-mode circuit for implementation of a chaotic neuron model. The proposed circuit uses a switched-current integrator and a nonlinear output function circuit, which is based on an operational transconductance amplifier, as building blocks. Is is shown by SPICE simulations and experiments using discrete elements that the proposed circuit well replicates the behavior of the chaotic neuron model.