
Keyword Search Result

[Keyword] IT (16,991 hits)

Showing results 16281-16300 of 16991

  • A Note on Optimal Checkpoint Sequence Taking Account of Preventive Maintenance

    Masanori ODAGIRI  Naoto KAIO  Shunji OSAKI  

     
    LETTER-Maintainability

      Vol:
    E77-A No:1
      Page(s):
    244-246

    Checkpointing is one of the most powerful tools for operating a computer system with high reliability, so checkpointing should be executed optimally in some well-defined sense. This note shows the optimal checkpoint sequence minimizing the expected loss. Numerical examples are given for illustration.
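    As a rough illustration of the kind of optimization involved, the following Python sketch minimizes an expected-loss expression over equally spaced checkpoints under an assumed Poisson-failure cost model (mission length T, checkpoint cost c, failure rate lam); this model and its parameters are illustrative assumptions, not the letter's formulation.

```python
# Illustrative optimal checkpoint placement under an ASSUMED cost model (not the
# model of the letter): failures arrive as a Poisson process with rate lam over a
# mission of length T, each checkpoint costs c, and the expected rework after a
# failure equals the time elapsed since the last checkpoint.  With n equally
# spaced checkpoints the expected loss is n*c + lam*(n+1)*(T/(n+1))**2/2.
import math

def expected_loss(n, T, c, lam):
    interval = T / (n + 1)
    return n * c + lam * (n + 1) * interval ** 2 / 2.0

def best_checkpoint_sequence(T, c, lam, n_max=200):
    n_opt = min(range(n_max + 1), key=lambda n: expected_loss(n, T, c, lam))
    interval = T / (n_opt + 1)
    return n_opt, [round(i * interval, 3) for i in range(1, n_opt + 1)]

if __name__ == "__main__":
    T, c, lam = 100.0, 1.0, 0.05
    n_opt, seq = best_checkpoint_sequence(T, c, lam)
    print(f"optimal number of checkpoints: {n_opt}")
    print(f"expected loss: {expected_loss(n_opt, T, c, lam):.3f}")
    print(f"checkpoint times: {seq}")
    # For comparison, the classical approximation gives interval ~ sqrt(2c/lam).
    print(f"approximate optimal interval: {math.sqrt(2 * c / lam):.2f}")
```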

  • MTBF for Consecutive-k-out-of-n: F Systems with Nonidentical Component Availabilities

    Masafumi SASAKI  Naohiko YAMAGUCHI  Tetsushi YUGE  Shigeru YANAGI  

     
    PAPER-System Reliability

      Vol:
    E77-A No:1
      Page(s):
    122-128

    Mean Time Between Failures (MTBF) is an important measure for practical repairable systems, but it has not previously been obtained for a repairable linear consecutive-k-out-of-n: F system. We first present a general formula for the (steady-state) availability of a repairable linear consecutive-k-out-of-n: F system with nonidentical components by employing the cut set approach or a topological availability method. Second, we present a general formula for the frequency of system failures of a repairable linear consecutive-k-out-of-n: F system with nonidentical components. The MTBF for the repairable linear consecutive-k-out-of-n: F system is then obtained from the frequency of system failures and the availability. Lastly, we derive figures which show the relationship between the MTBF and the repair rate µ or ρ (= λ/µ) in the repairable linear consecutive-k-out-of-n: F system. The figures are easy to use and are useful for reliability design.
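    For a concrete, if brute-force, cross-check on small systems, the sketch below enumerates the component states of a linear consecutive-k-out-of-n: F system with independent, nonidentical components to obtain the steady-state availability A and the frequency of system failures ν, and reports A/ν as the mean up time; the enumeration and the assumed independent two-state components are illustrative simplifications, not the closed-form formulas of the paper.

```python
# Brute-force cross-check for a SMALL repairable linear consecutive-k-out-of-n:F
# system with independent, nonidentical two-state components (failure rate lam[i],
# repair rate mu[i]).  This enumerates all 2**n component states; it is only an
# illustration of the quantities involved, not the closed-form method of the paper.
from itertools import product

def system_up(state, k):
    """The system is up unless k consecutive components are failed (0 = failed)."""
    run = 0
    for s in state:
        run = run + 1 if s == 0 else 0
        if run >= k:
            return False
    return True

def availability_and_failure_frequency(lam, mu, k):
    n = len(lam)
    a = [mu[i] / (lam[i] + mu[i]) for i in range(n)]   # component availabilities
    A, nu = 0.0, 0.0
    for state in product((0, 1), repeat=n):
        p = 1.0
        for i, s in enumerate(state):
            p *= a[i] if s == 1 else 1.0 - a[i]
        if not system_up(state, k):
            continue
        A += p                                          # state contributes to availability
        for i, s in enumerate(state):
            if s == 1 and not system_up(state[:i] + (0,) + state[i + 1:], k):
                nu += p * lam[i]                        # failure of component i downs the system
    return A, nu

if __name__ == "__main__":
    lam = [0.01, 0.02, 0.015, 0.01, 0.02]
    mu = [0.5, 0.4, 0.5, 0.6, 0.5]
    A, nu = availability_and_failure_frequency(lam, mu, k=2)
    print(f"availability A = {A:.6f}, failure frequency nu = {nu:.6f}")
    print(f"mean up time A/nu = {A / nu:.2f}")
```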

  • Software Reliability Measurement and Assessment with Stochastic Differential Equations

    Shigeru YAMADA  Mitsuhiro KIMURA  Hiroaki TANAKA  Shunji OSAKI  

     
    PAPER-Software Reliability

      Vol:
    E77-A No:1
      Page(s):
    109-116

    In this paper, we propose a plausible software reliability growth model by applying the mathematical technique of stochastic differential equations. First, we extend a basic differential equation describing the average behavior of software fault-detection processes during the testing phase to a stochastic differential equation of Itô type, and derive a probability distribution of its solution processes. Second, we obtain several software reliability measures from the probability distribution. Finally, applying the method of maximum likelihood, we estimate the unknown parameters in our model by using data available from actual software testing procedures, and numerically show the stochastic behavior of the number of faults remaining in the software system. Further, the model is compared with existing software reliability growth models in terms of goodness-of-fit.
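    The following Euler-Maruyama simulation illustrates the general technique on an assumed Itô-type stochastic extension of the exponential growth model, dN = b(a − N)dt + σ(a − N)dW; the particular drift and diffusion terms are illustrative choices, not necessarily those derived in the paper.

```python
# Euler-Maruyama simulation of an ASSUMED Ito-type stochastic software reliability
# growth model, dN = b*(a - N) dt + sigma*(a - N) dW, used only to illustrate the
# simulation technique; it is not claimed to be the exact model derived in the paper.
import math
import random

def simulate_fault_detection(a=100.0, b=0.05, sigma=0.03, T=100.0, steps=1000, seed=1):
    """Return one sample path of the cumulative number of detected faults N(t)."""
    random.seed(seed)
    dt = T / steps
    N, path = 0.0, [0.0]
    for _ in range(steps):
        dW = random.gauss(0.0, math.sqrt(dt))           # Brownian increment
        N += b * (a - N) * dt + sigma * (a - N) * dW    # Euler-Maruyama step
        path.append(N)
    return path

if __name__ == "__main__":
    path = simulate_fault_detection()
    print(f"faults detected by the end of testing: {path[-1]:.1f} (of a = 100 assumed)")
    print(f"estimated faults remaining: {100.0 - path[-1]:.1f}")
```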

  • Piecewise-Linear Analysis of Nonlinear Resistive Networks Containing Gummel-Poon Models or Shichman-Hodges Models

    Kiyotaka YAMAMURA  

     
    PAPER-Nonlinear Circuits and Systems

      Vol:
    E77-A No:1
      Page(s):
    309-316

    Finding DC solutions of nonlinear networks is one of the most difficult tasks in circuit simulation, and many circuit designers experience difficulties in finding DC solutions using Newton's method. Piecewise-linear analysis has been studied to overcome this difficulty. However, efficient piecewise-linear algorithms have not been proposed for nonlinear resistive networks containing the Gummel-Poon models or the Shichman-Hodges models. In this paper, a new piecewise-linear algorithm is presented for solving nonlinear resistive networks containing these sophisticated transistor models. The basic idea of the algorithm is to exploit the special structure of the nonlinear network equations, namely, the pairwise separability. The proposed algorithm is globally convergent and much more efficient than conventional simplicial-type piecewise-linear algorithms.

  • A Synthesis of Variable Wave Digital Filters

    Eiji WATANABE  Masato ITO  Nobuo MURAKOSHI  Akinori NISHIHARA  

     
    PAPER-Digital Signal Processing

      Vol:
    E77-A No:1
      Page(s):
    263-271

    It is often desired to change the cutoff frequencies of digital filters in applications such as digital electronic instruments. This paper proposes a design of variable lowpass digital filters with wider ranges of cutoff frequencies than conventional designs. Wave digital filters are used as the prototypes of the variable filters. The proposed design is based on frequency scaling in the s-domain, while conventional designs are based on z-domain lowpass-to-lowpass transformations. A first-order approximation by Taylor series expansion is used to make the multiplier coefficients in a wave digital filter obtained from a frequency-scaled LC filter become linear functions of the scaling parameter, similarly to the conventional design. Furthermore, this paper discusses the reduction of the approximation error. The curvature is introduced as a measure of the quality of the first-order approximation, and the use of a second-order approximation instead of the first-order one is proposed for multiplier coefficients with large curvature.
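    The sketch below shows, on a hypothetical coefficient function m(α) (not one taken from the paper), how a first-order Taylor approximation about the nominal scaling parameter compares with a second-order one, and how the curvature at the expansion point can indicate when the linear approximation is inadequate.

```python
# Hedged illustration: approximating a multiplier coefficient that depends on a
# frequency-scaling parameter alpha by first- and second-order Taylor expansions
# about alpha0 = 1.  The coefficient function m(alpha) below is a hypothetical
# stand-in, not one from the paper; the point is only to show how the curvature
# at the expansion point signals when the linear approximation is poor.
def taylor_compare(r=0.3, alpha0=1.0, alpha=1.4):
    m  = lambda a: (a - r) / (a + r)          # assumed coefficient function
    d1 = lambda a: 2 * r / (a + r) ** 2       # first derivative
    d2 = lambda a: -4 * r / (a + r) ** 3      # second derivative

    exact  = m(alpha)
    first  = m(alpha0) + d1(alpha0) * (alpha - alpha0)
    second = first + 0.5 * d2(alpha0) * (alpha - alpha0) ** 2
    curvature = abs(d2(alpha0)) / (1 + d1(alpha0) ** 2) ** 1.5

    print(f"exact m({alpha}) = {exact:.5f}")
    print(f"1st-order approx = {first:.5f}  (error {abs(first - exact):.5f})")
    print(f"2nd-order approx = {second:.5f}  (error {abs(second - exact):.5f})")
    print(f"curvature at alpha0 = {curvature:.5f}")

if __name__ == "__main__":
    taylor_compare()
```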

  • Secure Addition Sequence and Its Application on the Server-Aided Secret Computation Protocols

    Chi-Sung LAIH  Sung-Ming YEN  

     
    PAPER

      Vol:
    E77-A No:1
      Page(s):
    81-88

    A server-aided secret computation (SASC) protocol, also called a verifiable implicit asking protocol, is a protocol in which a powerful but untrusted auxiliary device (server) helps a smart card (client) compute a secret function efficiently. In this paper, we extend the concept of the addition sequence to the secure addition sequence and develop an efficient algorithm to construct such a sequence. By incorporating the secure addition sequence into the SASC protocol, its performance can be further enhanced.
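    The paper's secure addition sequence is not reproduced here, but the following sketch shows the underlying notion of an ordinary (non-secure, non-optimal) addition sequence for a set of exponents and how such a sequence drives a batch modular exponentiation; the construction is a simple binary-expansion heuristic chosen only for clarity.

```python
# Sketch of an ORDINARY addition sequence (every element after 1 is the sum of two
# earlier elements, and all target exponents appear) built by a simple
# binary-expansion heuristic, plus its use for batch modular exponentiation.
# This is neither the secure addition sequence of the paper nor an optimal one.
def addition_sequence(targets):
    targets = sorted(set(targets))
    seq = [1]
    p = 1
    while p * 2 <= targets[-1]:          # powers of two (each a doubling x + x)
        p *= 2
        seq.append(p)
    for t in targets:
        if t in seq:
            continue
        bits = [1 << i for i in range(t.bit_length()) if t >> i & 1]
        partial = bits[0]
        for b in bits[1:]:               # running partial sums of the set bits
            partial += b
            if partial not in seq:
                seq.append(partial)
    return sorted(seq)

def exponentiate_along(seq, g, n):
    """Compute g**e mod n for every e in the addition sequence seq."""
    powers = {1: g % n}
    for x in seq[1:]:
        a = next(a for a in seq if a < x and a in powers and (x - a) in powers)
        powers[x] = powers[a] * powers[x - a] % n   # one multiplication per element
    return powers

if __name__ == "__main__":
    seq = addition_sequence([7, 11, 23])
    print("addition sequence:", seq)      # e.g. [1, 2, 3, 4, 7, 8, 11, 16, 23]
    powers = exponentiate_along(seq, g=5, n=101)
    print({e: powers[e] for e in (7, 11, 23)})
```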

  • Identity-Based Non-interactive Key Sharing

    Hatsukazu TANAKA  

     
    PAPER

      Vol:
    E77-A No:1
      Page(s):
    20-23

    In this paper an identity-based non-interactive key sharing scheme (IDNIKS) is proposed in order to realize the original concept of the identity-based cryptosystem, for which no secure realization scheme has been proposed. First, the necessary conditions for a secure realization of IDNIKS are considered from two different points of view: (i) the possibility of sharing a common key non-interactively and (ii) the security against conspiracy of entities. Then a new non-interactive key sharing scheme is proposed, whose security depends on the difficulty of factoring. The most important contribution is to have succeeded in obtaining each entity's secret information as an exponent of that entity's identity information. The security of IDNIKS against conspiracy of entities is also considered in detail.

  • Electronic Voting Scheme Allowing Open Objection to the Tally

    Kazue SAKO  

     
    PAPER

      Vol:
    E77-A No:1
      Page(s):
    24-30

    In this paper, we present an electronic voting scheme with a single voting center that uses an anonymous channel. The proposed scheme is a 3-move protocol between each voter and the center, with one extra move if a voter wants to raise an objection to the tally. This objection can be broadcast widely, since it does not disclose the vote itself to any party other than the center. The main idea of the proposal is that each voter anonymously sends a public key signed by the center together with an encrypted vote decryptable with that key. Since even the center cannot modify a received ballot to a different vote under the same public key, the key can be used as evidence when making an open objection to the tally.

  • Optimal Free-Sensors Allocation Problem in Safety Monitoring System

    Kenji TANAKA  Keiko SAITOH  

     
    LETTER-Reliability and Safety

      Vol:
    E77-A No:1
      Page(s):
    237-239

    This paper proposes the optimal free-sensor allocation problem (OFSAP) in safety monitoring systems. OFSAP is the problem of deciding the optimal allocation of several sensors, which we call free sensors, to plural objects. The solution of OFSAP gives the optimal allocation which minimizes the expected loss caused by failed-dangerous (FD) failures and failed-safe (FS) failures; an FD-failure is a failure to generate an alarm for an unsafe object, and an FS-failure is the generation of an alarm for a safe object. We show the unexpected result that, under certain conditions, a safer object should be monitored by more sensors.
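    A brute-force version of the allocation problem can be written down directly once a loss model is fixed. The sketch below assumes that object j is unsafe with probability d[j], that each sensor detects an unsafe object with probability p and false-alarms on a safe object with probability q, and that an alarm is raised whenever any assigned sensor alarms; these modeling choices are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical brute-force free-sensor allocation.  Assumed model (not the
# paper's): object j is unsafe with probability d[j]; each sensor detects an
# unsafe object with probability p and false-alarms on a safe object with
# probability q; an alarm is issued if any assigned sensor alarms.  c_fd and
# c_fs are the losses for an FD-failure (missed alarm) and an FS-failure.
from itertools import combinations_with_replacement
from collections import Counter

def object_loss(m, d, p, q, c_fd, c_fs):
    miss = (1 - p) ** m               # all m sensors miss the unsafe state
    false_alarm = 1 - (1 - q) ** m    # at least one sensor false-alarms
    return d * c_fd * miss + (1 - d) * c_fs * false_alarm

def best_allocation(d, n_sensors, p=0.9, q=0.05, c_fd=100.0, c_fs=1.0):
    n_obj = len(d)
    best = None
    for assign in combinations_with_replacement(range(n_obj), n_sensors):
        counts = Counter(assign)
        loss = sum(object_loss(counts.get(j, 0), d[j], p, q, c_fd, c_fs)
                   for j in range(n_obj))
        if best is None or loss < best[0]:
            best = (loss, [counts.get(j, 0) for j in range(n_obj)])
    return best

if __name__ == "__main__":
    d = [0.01, 0.05, 0.2]             # unsafe-state probabilities of three objects
    loss, alloc = best_allocation(d, n_sensors=4)
    print("sensors per object:", alloc, "expected loss:", round(loss, 4))
```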

  • A Factored Reliability Formula for Directed Source-to-All-Terminal Networks

    Yoichi HIGASHIYAMA  Hiromu ARIYOSHI  Isao SHIRAKAWA  Shogo OHBA  

     
    PAPER-System Reliability

      Vol:
    E77-A No:1
      Page(s):
    134-143

    In a probabilistic graph (network), source-to-all-terminal (SAT) reliability may be defined as the probability that there exists at least one path consisting only of successful arcs from the source vertex s to every other vertex. In this paper, we define an optimal SAT reliability formula as one with a minimal number of literals or operators. This paper first describes an arc-reduction (open- or short-circuiting) method for obtaining a factored formula of a directed graph. Next, we discuss a simple strategy to obtain an optimal formula as a product of the reliability formulas of vertex-section graphs, each of which contains a distinct strongly connected component of the given graph. This method reduces the computing cost and data processing effort required to generate the optimal factored formula, which contains no identical product terms.
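    The open-/short-circuiting recursion at the core of factoring can be illustrated directly (without the paper's reductions or the optimal product-form strategy) by the following sketch, which computes the SAT reliability of a small directed network by conditioning on one uncertain arc at a time.

```python
# Minimal factoring (conditioning) computation of directed source-to-all-terminal
# reliability, shown only to illustrate the open-/short-circuiting recursion; the
# paper's reductions and optimal-formula strategy are not reproduced here.
def reachable_all(n, arcs, s=0):
    """True if every vertex 0..n-1 is reachable from s over the given arcs."""
    adj = {v: [] for v in range(n)}
    for u, v in arcs:
        adj[u].append(v)
    seen, stack = {s}, [s]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def sat_reliability(n, arcs, s=0):
    """arcs: list of (u, v, p) with arc reliability p; vertices are 0..n-1."""
    for i, (u, v, p) in enumerate(arcs):
        if 0.0 < p < 1.0:
            rest = arcs[:i] + arcs[i + 1:]
            work = rest + [(u, v, 1.0)]        # short-circuit: arc surely works
            fail = rest                        # open-circuit: arc removed
            return p * sat_reliability(n, work, s) + (1 - p) * sat_reliability(n, fail, s)
    sure = [(u, v) for u, v, p in arcs if p == 1.0]
    return 1.0 if reachable_all(n, sure, s) else 0.0

if __name__ == "__main__":
    # Small example: 3 vertices, source 0, each arc works with probability 0.9.
    arcs = [(0, 1, 0.9), (0, 2, 0.9), (1, 2, 0.9), (2, 1, 0.9)]
    print(f"SAT reliability: {sat_reliability(3, arcs):.6f}")
```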

  • On the Knowledge Tightness of Zero-Knowledge Proofs

    Toshiya ITOH  Atsushi KAWAKUBO  

     
    PAPER

      Vol:
    E77-A No:1
      Page(s):
    47-55

    In this paper, we study the knowledge tightness of zero-knowledge proofs. To this end, we present a new measure for the knowledge tightness of zero-knowledge proofs and show that if a language L has a bounded round zero-knowledge proof with knowledge tightness t(|x|) ≤ 2 - |x|^(-c) for some c > 0, then L ∈ BPP, and that any language L ∈ AM has a bounded round zero-knowledge proof with knowledge tightness t(|x|) ≤ 2 - 2^(-O(|x|)) under the assumption that collision intractable hash functions exist. This implies that in the case of a bounded round zero-knowledge proof for a language L ∉ BPP, the optimal knowledge tightness is "2" unless AM = BPP. In addition, we show that any language L ∈ IP has an unbounded round zero-knowledge proof with knowledge tightness t(|x|) ≤ 1.5 under the assumption that nonuniformly secure probabilistic encryptions exist.

  • A Note on AM Languages Outside NP ∪ co-NP

    Hiroki SHIZUYA  Toshiya ITOH  

     
    PAPER

      Vol:
    E77-A No:1
      Page(s):
    65-71

    In this paper we investigate AM languages that seem to be located outside NP ∪ co-NP. We give two natural examples of such AM languages, GIP and GH, which stand for Graph Isomorphism Pattern and Graph Heterogeneity, respectively. We show that GIP is in ΔP2 ∩ AM ∩ co-AM but is unlikely to be in NP ∪ co-NP, and that GH is in ΔP2 ∩ AM but is unlikely to be in NP ∪ co-AM. We also show that GIP is in SZK. We then discuss some structural properties related to these languages: any language that is polynomial time truth-table reducible to GIP is in AM ∩ co-AM; GIP is in co-SZK if SZK ∩ co-SZK is closed under conjunctive polynomial time bounded-truth-table reducibility; both GIP and GH are in DP. Here DP is the class of languages that can be expressed in the form X ∩ Y, where X ∈ NP and Y ∈ co-NP.

  • The Enhancement of Electromigration Lifetime under High Frequency Pulsed Conditions

    Kazunori HIRAOKA  Kazumitsu YASUDA  

     
    PAPER-Reliability Testing

      Vol:
    E77-A No:1
      Page(s):
    195-203

    Experimental evidence of a two-step enhancement in electromigration lifetime is presented through pulsed testing that extends over a wide frequency range from 7 mHz to 50 MHz. It is also found, through an accompanying failure analysis, that the failure mechanism is not affected by current pulsing. Test samples were the lower metal lines and the through-holes in double-level interconnects; the same results were obtained for both samples. The testing temperature of the test conductor was determined with the Joule heating taken into account, to eliminate errors in lifetime estimation due to temperature errors. The two-step enhancement in lifetime is extracted by normalizing the pulsed electromigration lifetime by the continuous one. The first step occurs in the frequency range from 0.1 to 10 kHz, where the lifetime increases with (duty ratio)^(-2), and the second step occurs above 100 kHz, where it increases with (duty ratio)^(-3). The transition frequency of the first-step enhancement shifts to the higher frequency region with a decrease in stress temperature or an increase in current density, whereas the transition frequency of the second step is not affected by these stress conditions. The lifetime enhancement is analyzed in relation to the relaxation process during current pulsing. According to the two-step behavior, two distinct relaxation times are assumed, as opposed to the single relaxation time in other proposed models. The results of the analysis agree with the experimental results for the dependence on the frequency and duty ratio of the pulses. The two experimentally derived relaxation times are about 5 s and 1 µs.

  • High Reliability Design Method of LC Tuning Circuit and Substantiation of Aging Characteristics for 20 Years

    Mitsugi SAITA  Tatsuo YOSHIE  Katsumi WATANABE  Kiyoshi MURAMORI  

     
    PAPER-Evaluation of Reliability Improvement

      Vol:
    E77-A No:1
      Page(s):
    213-219

    In 1963, the authors began to develop a tuning circuit (hereafter referred to as the 'circuit') consisting of an inductor, fixed capacitors and a variable capacitor. The circuit required very high accuracy and stability, and the influence of aging on the resonant frequency needed to be kept within Δf/f0 ≤ 0.12% over 20 years. When we started, there was no methodology available for designing such a long-term stable circuit, so we reinvestigated our previous studies concerning aging characteristics and formed a design concept. We designed the circuit bearing in mind that an inductor is subject to natural and stress demagnetization (as indicated by disaccommodation), and assumed that a capacitor changes its characteristics linearly over a logarithmic scale of time. (This assumption was based on short-term test results derived from previous studies.) We measured the aging characteristics of the circuits at room temperature for 20 years, starting in 1966. The measurement results from the 20-year study revealed that the aging characteristics predicted by the design concept were reasonably accurate.
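    As a back-of-the-envelope illustration of how tight a 0.12% budget is, the sketch below uses only the textbook relation f0 = 1/(2π√(LC)) and the first-order approximation Δf/f0 ≈ -(ΔL/L + ΔC/C)/2 to translate assumed component drifts into a resonant-frequency drift; the component values and drift figures are made up for illustration, not taken from the paper.

```python
# Back-of-the-envelope check of the drift budget for an LC tuning circuit, using
# only the textbook relation f0 = 1 / (2*pi*sqrt(L*C)).  The component values and
# drift fractions below are made-up illustrations, not data from the paper.
import math

def resonant_frequency(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def relative_drift(L, C, dL_rel, dC_rel):
    """Exact relative change of f0 when L and C drift by the given fractions."""
    f0 = resonant_frequency(L, C)
    f1 = resonant_frequency(L * (1 + dL_rel), C * (1 + dC_rel))
    return (f1 - f0) / f0

if __name__ == "__main__":
    L, C = 100e-6, 253.3e-12            # roughly a 1 MHz tank (illustrative values)
    drift = relative_drift(L, C, dL_rel=0.001, dC_rel=0.0012)
    print(f"f0 = {resonant_frequency(L, C) / 1e6:.4f} MHz")
    print(f"exact drift  : {drift * 100:.4f} %")
    # First-order approximation: df/f0 ~ -(dL/L + dC/C) / 2
    print(f"approx drift : {-(0.001 + 0.0012) / 2 * 100:.4f} %")
```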

  • Continuous Relation between Models and System Performances--A Case Study for Optimal Servosystems--

    Hajime MAEDA  Shinzo KODAMA  

     
    PAPER-Control and Computing

      Vol:
    E77-A No:1
      Page(s):
    257-262

    This paper is concerned with the continuity of the relation between models of the plant and the predicted performance of the system designed based on those models. To state the problem more precisely, let P be the transfer matrix of a plant model, and let A be the transfer matrix of interest of the designed system, which is regarded as a performance measure for evaluating the designed responses. A depends upon P and is written as A = A(P). From a practical point of view, it is necessary that the function A(P) be continuous with respect to P. In this paper we consider the linear quadratic optimal servosystem with integrators (LQI) scheme as the design methodology, and prove that A(P) depends continuously on the plant transfer matrix P if the topology on the family of plant models is the graph topology. A numerical example is given to illustrate the result.

  • Reforming the National Research Institutions in Japan

    Nobuyoshi FUGONO  

     
    INVITED PAPER

      Vol:
    E77-B No:1
      Page(s):
    1-4

    It is recognized in Japan that reform of the national research institutions is urgently necessary. The present situation and constraints are described, and action items are discussed.

  • A Method for Estimating the Mean-Squared Error of Distributed Arithmetic

    Jun TAKEDA  Shin-ichi URAMOTO  Masahiko YOSHIMOTO  

     
    PAPER-Digital Signal Processing

      Vol:
    E77-A No:1
      Page(s):
    272-280

    It is important for LSI system designers to estimate computational errors when designing LSIs for numeric computation. Both for predicting the errors at an early stage of the design and for choosing a proper hardware configuration to achieve a target performance, it is desirable that the errors can be estimated from a minimum of parameters. This paper presents a theoretical error analysis of multiply-accumulation implemented by distributed arithmetic (DA) and proposes a new method for estimating the mean-squared error. DA is a method of implementing the multiply-accumulation that is defined as an inner product of an input vector and a fixed coefficient vector. Using a ROM which stores partial products, DA calculates the output by accumulating the partial products bit-serially. As DA uses no parallel multipliers, it needs a smaller chip area than methods using parallel multipliers. Thus DA is effectively utilized for the LSI implementation of digital signal processing systems which require the multiply-accumulation. It has been known that, if the input data are uniformly distributed, the mean-squared error of the multiply-accumulation implemented by DA is a function of only the word lengths of the input, the output, and the ROM. The proposed method can calculate the mean-squared error from the same parameters even when the input data are not uniformly distributed. The basic idea of the method is to regard the input data as a combination of uniformly distributed partial data with different word lengths. The mean-squared error can then be predicted as a weighted sum of the contributions of the partial data, where each weight is the ratio of the partial data to the total input data. Finally, the method is applied to a two-dimensional inverse discrete cosine transform (IDCT), and its practicability is confirmed by computer simulations of the IDCT implemented by DA.
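    A minimal DA inner-product sketch is given below for unsigned fractional inputs: partial-product sums are precomputed into a ROM, the inputs are processed one bit plane at a time, and the ROM entries are rounded to a chosen word length so that a Monte Carlo run exposes the resulting mean-squared error. The simplifications (no two's-complement sign handling, simple rounding rather than the paper's exact quantization model) are assumptions made for brevity.

```python
# Sketch of distributed arithmetic (DA) for the inner product y = sum_k A[k]*x[k],
# with unsigned fractional inputs of in_bits bits and ROM entries rounded to
# rom_frac_bits fractional bits.  A simplified illustration of where the
# word-length parameters enter, not the error formula of the paper.
import random

def build_rom(A, rom_frac_bits):
    """ROM[addr] holds the sum of A[k] over the set bits of addr, rounded."""
    scale = 1 << rom_frac_bits
    return [round(sum(a for k, a in enumerate(A) if addr >> k & 1) * scale) / scale
            for addr in range(1 << len(A))]

def da_inner_product(A, x, in_bits, rom_frac_bits):
    rom = build_rom(A, rom_frac_bits)
    q = [int(v * (1 << in_bits)) for v in x]      # quantize inputs to in_bits bits
    acc = 0.0
    for j in range(1, in_bits + 1):               # process one bit plane per step
        addr = sum(((q[k] >> (in_bits - j)) & 1) << k for k in range(len(x)))
        acc += rom[addr] * 2.0 ** (-j)            # shift-and-accumulate
    return acc

if __name__ == "__main__":
    random.seed(0)
    A = [0.1, -0.25, 0.6, 0.3]                    # fixed coefficient vector
    errors = []
    for _ in range(10000):
        x = [random.random() for _ in A]          # inputs in [0, 1)
        exact = sum(a * v for a, v in zip(A, x))
        errors.append((da_inner_product(A, x, in_bits=8, rom_frac_bits=8) - exact) ** 2)
    print("simulated mean-squared error:", sum(errors) / len(errors))
```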

  • An Equivalence Net-Condition between Place-Liveness and Transition-Liveness of Petri Nets and Their Initial-Marking-Based Necessary and Sufficient Liveness Conditions

    Tadashi MATSUMOTO  Kohkichi TSUJI  

     
    PAPER-Graphs, Networks and Matroids

      Vol:
    E77-A No:1
      Page(s):
    291-301

    The structural necessary and sufficient condition for "transition-liveness implies place-liveness and vice versa" in a subclass NII of general Petri nets is given as "the place- and transition-live Petri net, or PTL net, ÑII". Furthermore, "the one-token-condition Petri net, or OTC net, N̂II" is introduced, which means that every MSDL (minimal structural deadlock) is transition- and place-live under at least one initial token, i.e., N̂II is transition- and place-live under the above initial marking. These subclasses NII, ÑII (⊆ NII), and N̂II (⊆ ÑII) are almost the general Petri nets, except for at least one MSTR (minimal structural trap) and at least one pair of "a virtual MSTR or a virtual STR" and "a virtual MSDL" of an MBTR (minimal behavioral trap) in connection with making an MSDL transition-live.

  • A Sign Test for Finding All Solutions of Piecewise-Linear Resistive Circuits

    Kiyotaka YAMAMURA  

     
    PAPER-Nonlinear Circuits and Systems

      Vol:
    E77-A No:1
      Page(s):
    317-323

    An efficient algorithm is presented for finding all solutions of piecewise-linear resistive circuits. In this algorithm, a simple sign test is performed to eliminate many linear regions that do not contain a solution. This makes the number of simultaneous linear equations to be solved much smaller. This test, in its original form, is applied to each linear region; but this is time-consuming because the number of linear regions is generally very large. In this paper, it is shown that the sign test can be applied to super-regions consisting of adjacent linear regions. Therefore, many linear regions are discarded at the same time, and the computational efficiency of the algorithm is substantially improved. The branch-and-bound method is used in applying the sign test to super-regions. Some numerical examples are given, and it is shown that all solutions are computed very rapidly. The proposed algorithm is simple, efficient, and can be easily programmed.
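    The baseline that the sign test improves upon, namely solving the linear equation of every region and keeping only solutions that fall inside their own region, can be illustrated in one dimension as follows; the sign test itself and the super-region branch-and-bound are not reproduced here.

```python
# Baseline region-by-region search for all solutions of a one-dimensional
# piecewise-linear equation f(x) = 0.  This illustrates only the exhaustive
# strategy that the paper's sign test improves upon (by discarding whole
# super-regions of linear regions at once); the sign test itself is not shown.
def all_pwl_solutions(breakpoints, values):
    """breakpoints: increasing x-coordinates; values: f at those points.
    f is linear on each segment.  Returns all roots of f(x) = 0."""
    roots = []
    points = list(zip(breakpoints, values))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 == y1:
            continue                                  # constant segment: no isolated root
        x = x0 + (0.0 - y0) * (x1 - x0) / (y1 - y0)   # solve the segment's linear equation
        if x0 <= x <= x1:                             # keep it only if it lies in the region
            roots.append(x)
    return sorted(set(roots))

if __name__ == "__main__":
    xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
    ys = [3.0, -1.0, 0.5, -0.5, 2.0]                  # a PWL curve crossing zero several times
    print("solutions:", all_pwl_solutions(xs, ys))
```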

  • An Indexing Framework for Adaptive Arrangement of Mechanics Problems for ITS

    Tsukasa HIRASHIMA  Toshitada NIITSU  Kentaro HIROSE  Akihiro KASHIHARA  Jun'ichi TOYODA  

     
    PAPER

      Vol:
    E77-D No:1
      Page(s):
    19-26

    This paper describes an indexing framework for the adaptive arrangement of mechanics problems in an ITS (Intelligent Tutoring System). There have been some studies on the adaptive arrangement of problems in ITSs. However, they characterize a practice problem only by its solution method. Because their target domains are sufficiently formalized, this kind of characterization suffices to describe the relations between any two problems of such a class; in other words, it is enough to make students understand only the solution methods for the given class of problems. In other domains, however, it is also important to understand the concepts used in the problems, not only the solution methods. In mechanics problems, concepts such as mechanical objects, their attributes, and phenomena composed of the objects and the attributes also need to be taught. Therefore, the difference between the applied solution methods is not sufficient to describe the difference between two given problems, and an advanced new characterization framework is necessary to use this type of problem properly in practice. In this paper, we describe a mechanics problem with three components: (1) surface structure, (2) phenomenon structure, and (3) solution structure. The surface structure describes the surface features of a problem: the mechanical objects, their configuration, and each object's attributes given or required in the problem. The phenomenon structure is described by attributes and the operational relations among them included in the phenomenon specific to the surface structure. The solution structure is described by a sequence of operational relations which compute the required attributes from the given attributes. We call this characterization indexing because we use it as the index of each problem. This paper also describes an application of the indexing to the arrangement of problems. We propose two mechanisms of control: (a) reordering of a problem sequence, and (b) simplifying of a problem. So far, we have implemented the basic functions to realize these mechanisms, except for the interface part.
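    One possible (purely hypothetical) encoding of such an index is sketched below: each problem carries a surface, phenomenon, and solution structure, and a toy reordering rule prefers the remaining problem that shares the most index elements with the one just solved; the field choices and similarity rule are illustrative assumptions, not the authors' schema.

```python
# Hypothetical encoding of the three-part problem index (surface, phenomenon,
# solution structures) plus a toy reordering rule that prefers the problem
# sharing the most index elements with the one just solved.  All field choices
# here are illustrative assumptions, not the authors' schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProblemIndex:
    name: str
    surface: frozenset      # mechanical objects, configuration, given/required attributes
    phenomenon: frozenset   # attributes and operational relations in the phenomenon
    solution: tuple         # sequence of operational relations from given to required

    def overlap(self, other):
        return (len(self.surface & other.surface)
                + len(self.phenomenon & other.phenomenon)
                + len(set(self.solution) & set(other.solution)))

def reorder(problems, just_solved):
    """Reorder the remaining problems so the most similar one comes next."""
    return sorted(problems, key=lambda p: -p.overlap(just_solved))

if __name__ == "__main__":
    p1 = ProblemIndex("block-on-slope", frozenset({"block", "slope", "mass", "angle"}),
                      frozenset({"gravity", "normal-force"}), ("resolve-forces", "newton2"))
    p2 = ProblemIndex("block-with-friction", frozenset({"block", "slope", "mass", "angle", "mu"}),
                      frozenset({"gravity", "normal-force", "friction"}),
                      ("resolve-forces", "friction-law", "newton2"))
    p3 = ProblemIndex("pendulum", frozenset({"bob", "string", "length"}),
                      frozenset({"gravity", "tension"}), ("energy-conservation",))
    print([p.name for p in reorder([p2, p3], just_solved=p1)])
```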
