Keyword Search Result

[Keyword] reliability(282hit)

201-220hit(282hit)

  • New Self-Healing Scheme that Realizes Multiple Reliability on ATM Networks

    Taishi YAHARA  Ryutaro KAWAMURA  

     
    PAPER-Switching

      Vol:
    E83-B No:12
      Page(s):
    2615-2625

    This paper proposes a new restoration concept for ATM networks that realizes the rapid, multiple reliability/cost-level restoration required to support many different network services. First, the need for rapid, multiple-reliability-level restoration in future networks is shown. Self-healing schemes based on distributed restoration mechanisms achieve rapid restoration but do not provide multiple reliability levels. A new self-healing scheme that satisfies both requirements is therefore presented, and a Multiple Reliability Level Virtual Path network concept is proposed on the basis of it. Next, it is explained how the new self-healing scheme can be realized as an extension of an existing self-healing scheme with two simple additional functions. Finally, evaluations confirm the effectiveness of the proposed scheme: it realizes a network that fulfills the strongly required rapidity and multiple-reliability requirements.

  • Optimal Grid Pattern for Automated Camera Calibration Using Cross Ratio

    Chikara MATSUNAGA  Yasushi KANAZAWA  Kenichi KANATANI  

     
    PAPER-Image Processing

      Vol:
    E83-A No:10
      Page(s):
    1921-1928

    With a view to virtual studio applications, we design an optimal grid pattern such that the observed image of a small portion of it can be matched to its corresponding position in the pattern easily. The grid shape is so determined that the cross ratio of adjacent intervals is different everywhere. The cross ratios are generated by an optimal Markov process that maximizes the accuracy of matching. We test our camera calibration system using the resulting grid pattern in a realistic setting and show that the performance is greatly improved by applying techniques derived from the designed properties of the pattern.
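
    For reference, the cross ratio exploited by such grid designs is the standard projective invariant of four collinear points P_1, ..., P_4; in its general form (not a formula quoted from the paper),

      \lambda(P_1, P_2; P_3, P_4) = \frac{\overline{P_1 P_3}\, \overline{P_2 P_4}}{\overline{P_1 P_4}\, \overline{P_2 P_3}}

    Because \lambda is invariant under projective transformations, cross ratios measured in the observed image can be matched directly against those designed into the pattern.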

  • On the Relation between Viterbi Decoding with Labels and the SOVA

    Masato TAJIMA  Keiji TAKIDA  Zenshiro KAWASAKI  

     
    LETTER-Coding Theory

      Vol:
    E83-A No:10
      Page(s):
    1966-1970

    Both Viterbi decoding with labels (i.e., the Yamamoto-Itoh scheme) and the soft-output Viterbi algorithm (SOVA) evaluate the metric difference between the maximum-likelihood (ML) path and the discarded path at each level in the trellis. Noting this fact, we show that the former scheme also provides information about the reliability values for decoded information bits.

  • Error Exponent for Coding of Memoryless Gaussian Sources with a Fidelity Criterion

    Shunsuke IHARA  Masashi KUBO  

     
    PAPER-Source Coding and Data Compression

      Vol:
    E83-A No:10
      Page(s):
    1891-1897

    We are interested in the error exponent for source coding with a fidelity criterion. For each fixed distortion level Δ, the maximum attainable error exponent at rate R, as a function of R, is called the reliability function. The minimum rate achieving a given error exponent is called the minimum achievable rate. For memoryless sources with a finite alphabet, Marton (1974) gave an expression for the reliability function. The aim of this paper is to derive formulas for the reliability function and the minimum achievable rate for memoryless Gaussian sources.
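
    For orientation, Marton's finite-alphabet result is commonly stated in the following form, with P the source distribution, D(Q ‖ P) the relative entropy, and R(Q, Δ) the rate-distortion function of Q at distortion Δ; this is the textbook statement, not the Gaussian formula derived in the paper:

      F(R, \Delta) \;=\; \inf_{Q \,:\, R(Q, \Delta) \,>\, R} D(Q \,\|\, P)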

  • A Discrete Gompertz Equation and a Software Reliability Growth Model

    Daisuke SATOH  

     
    PAPER-Software Engineering

      Vol:
    E83-D No:7
      Page(s):
    1508-1513

    I describe a software reliability growth model that yields accurate parameter estimates even with a small amount of input data. The model is based on a proposed discrete analog of the Gompertz equation that has an exact solution. As the time interval tends to zero, the difference equation tends to the differential equation on which the Gompertz curve model is defined, and its exact solution tends to the exact solution of that differential equation. Because the difference equation has an exact solution, the discrete model preserves the characteristics of the Gompertz model. The proposed model therefore provides accurate parameter estimates, making it possible to predict early in the test phase when the software can be released.
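
    As background, the continuous Gompertz curve model referred to above can be written as follows, with N(t) the expected cumulative number of detected faults, k the eventual total, and a, b shape parameters (0 < a, b < 1); the paper's specific difference equation is not reproduced here:

      N(t) = k\, a^{\,b^{t}}, \qquad \frac{dN}{dt} = (-\ln b)\, N(t)\, \bigl(\ln k - \ln N(t)\bigr)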

  • Optimal Homography Computation with a Reliability Measure

    Kenichi KANATANI  Naoya OHTA  Yasushi KANAZAWA  

     
    PAPER

      Vol:
    E83-D No:7
      Page(s):
    1369-1374

    We describe a theoretically optimal algorithm for computing the homography between two images. First, we derive a theoretical accuracy bound based on a mathematical model of image noise and perform simulations to confirm that our renormalization technique effectively attains that bound. Then, we apply our technique to mosaicing of images with small overlaps. Using real images, we show how our algorithm reduces the instability of the image mapping.
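
    For comparison, a plain least-squares (DLT) homography estimate can be sketched as below. This is a minimal NumPy baseline, not the renormalization algorithm of the paper, and the function name and example values are illustrative only.

      import numpy as np

      def dlt_homography(src, dst):
          """Estimate H with dst ~ H @ src (homogeneous) by plain least squares (DLT)."""
          rows = []
          for (x, y), (u, v) in zip(src, dst):
              # Each correspondence gives two linear equations in the 9 entries of H.
              rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
              rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
          # The solution is the right singular vector of the smallest singular value.
          _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
          H = vt[-1].reshape(3, 3)
          return H / H[2, 2]

      # Round trip with four corners of a unit square and a known homography.
      H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
      pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
      proj = (H_true @ np.c_[pts, np.ones(4)].T).T
      proj = proj[:, :2] / proj[:, 2:3]
      print(np.round(dlt_homography(pts, proj), 4))   # recovers H_true up to rounding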

  • A Generalization of Consecutive k-out-of-n:G Systems

    Min-Sheng LIN  Ming-Sang CHANG  Deng-Jyi CHEN  

     
    LETTER-Fault Tolerance

      Vol:
    E83-D No:6
      Page(s):
    1309-1313

    A generalized class of consecutive-k-out-of-n:G systems, referred to as Con/k*/n:G systems, is studied. A Con/k*/n:G system has n ordered components and is good if and only if the k_i consecutive components that originate at some component i are all good, where k_i is a function of i. Theorem 1 gives an O(n)-time equation for computing the reliability of a linear system, and Theorem 2 gives an O(n^2)-time equation for a circular system. A distributed computing system with a linear (ring) topology is an example of such a system. This application is important because, for other classes of topologies, such as general graphs, planar graphs, series-parallel graphs, tree graphs, and star graphs, this problem has been proven to be NP-hard.
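
    To make the system definition concrete, the following brute-force sketch (exponential in n, unlike the paper's O(n) equation) computes the reliability of a linear Con/k*/n:G system with independent components; the example values are illustrative assumptions.

      from itertools import product

      def con_k_star_n_g_reliability(p, k):
          """Exact reliability of a linear Con/k*/n:G system by enumerating all component states.

          p[i] is the reliability of component i; the system is good iff, for some i, the
          k[i] consecutive components starting at component i all work (the window must fit).
          """
          n = len(p)
          total = 0.0
          for states in product([0, 1], repeat=n):          # 1 = good, 0 = failed
              prob = 1.0
              for works, pi in zip(states, p):
                  prob *= pi if works else (1.0 - pi)
              good = any(i + k[i] <= n and all(states[i:i + k[i]]) for i in range(n))
              if good:
                  total += prob
          return total

      # Five components; the required run length k[i] depends on the starting index i.
      print(round(con_k_star_n_g_reliability([0.9, 0.8, 0.95, 0.85, 0.9], [3, 2, 2, 2, 1]), 6))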

  • Optimum Order Time for a Spare Part Inventory System Modeled by a Non-Regenerative Stochastic Petri Net

    Qun JIN  Richard F. VIDALE  Yoshio SUGASAWA  

     
    PAPER

      Vol:
    E83-A No:5
      Page(s):
    818-827

    We determine the optimum time T_OPT at which to order a spare part for a system before the part in operation has failed. T_OPT is a function of the part's failure-time distribution, the lead (delivery) time of the part, its inventory cost, and the cost of downtime while awaiting delivery. The probabilities of the system's up and down states are obtained from a non-regenerative stochastic Petri net. T_OPT is found by minimizing E[cost], the expected cost of inventory and downtime. Three cases are compared: 1) exponential order and lead times, 2) deterministic order time and exponential lead time, and 3) deterministic order and lead times. In Case 1, it is shown analytically that, depending on the ratio of inventory to downtime costs, the optimum policy is one of three: order a spare part immediately at t = 0, wait until the part in operation fails, or order before failure at T_OPT > 0. Numerical examples illustrate the three cases.
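
    For a rough feel for Case 1, the Monte Carlo sketch below scans candidate order times under an assumed cost model (a holding cost per unit time the spare waits in inventory and a downtime cost per unit time the system waits for the spare); the rates, costs, and the policy of ordering immediately if the part fails first are illustrative assumptions, not the Petri-net model of the paper.

      import random

      def expected_cost(order_time, fail_rate=1.0, lead_rate=2.0,
                        hold_cost=1.0, down_cost=10.0, trials=50_000, seed=1):
          """Monte Carlo estimate of E[cost] when the spare is ordered at a planned time."""
          rng = random.Random(seed)
          total = 0.0
          for _ in range(trials):
              failure = rng.expovariate(fail_rate)           # failure time of the part in operation
              placed = min(order_time, failure)              # order immediately if the part fails first
              arrival = placed + rng.expovariate(lead_rate)  # exponential lead (delivery) time
              if arrival <= failure:
                  total += hold_cost * (failure - arrival)   # spare sits in inventory until needed
              else:
                  total += down_cost * (arrival - failure)   # system is down while awaiting delivery
          return total / trials

      # Scan a grid of planned order times and report the cheapest one found.
      grid = [0.05 * i for i in range(61)]
      best = min(grid, key=expected_cost)
      print(f"approximate optimum order time: {best:.2f}, E[cost] ~ {expected_cost(best):.3f}")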

  • On the Concept of "Stability" in Asynchronous Distributed Decision-Making Systems

    Tony S. LEE  Sumit GHOSH  

     
    PAPER-Real Time Control

      Vol:
    E83-B No:5
      Page(s):
    1023-1038

    Asynchronous, distributed, decision-making (ADDM) systems constitute a special class of distributed problems and are characterized as large, complex systems wherein the principal elements are geographically dispersed entities that communicate among themselves, asynchronously, through message passing and are permitted autonomy in local decision-making. A fundamental property of ADDM systems is stability, which refers to their behavior under representative perturbations to their operating environments, given that such systems are intended to be real, complex, and to some extent mission-critical systems that are subject to unexpected changes in their operating conditions. ADDM systems are closely related to autonomous decentralized systems (ADS) in their principal elements, the difference being that the characteristics and boundaries of ADDM systems are defined rigorously. This paper introduces the concept of stability in ADDM systems and proposes an intuitive yet practical and usable definition that is inspired by those used in control systems and physics. A comprehensive stability analysis on an accurate simulation model will provide the necessary assurance, with a high level of confidence, that the system will perform adequately. An ADDM system is defined as stable if it returns to a steady-state in finite time following a perturbation, provided that it is initiated in a steady-state. Equilibrium or steady-state is defined by placing bounds on the measured error in the system. Where the final steady-state is equivalent to the initial one, a system is referred to as strongly stable. If the final steady-state is potentially worse than the initial one, a system is deemed marginally stable. When a system fails to return to a steady-state following the perturbation, it is unstable. The perturbations are classified as either changes in the input pattern or changes in one or more environmental characteristics of the system, such as hardware failures. Thus, the key elements in the study of stability include steady-state, perturbations, and stability itself. Since the development of rigorous analytical models for most ADDM systems is difficult, if not impossible, the definitions of the key elements proposed in this paper constitute a general framework for investigating stability. For a given ADDM system, the definitions are based on performance indices that must be judiciously identified by the system architect and are likely to be unique. While a comprehensive study of all possible perturbations is too complex and time consuming, this paper focuses on a key subset of perturbations that are important and are likely to occur with greater frequency. To facilitate the understanding of stability in representative real-world systems, this paper reports the analysis of two basic manifestations of ADDM systems that have been reported in the literature: (i) a decentralized military command and control problem, MFAD, and (ii) a novel distributed algorithm with soft reservation for efficient scheduling and congestion mitigation in railway networks, RYNSORD. Stability analysis of MFAD and RYNSORD yields key stable and unstable conditions.

  • Markovian Software Availability Measurement Based on the Number of Restoration Actions

    Koichi TOKUNO  Shigeru YAMADA  

     
    PAPER

      Vol:
    E83-A No:5
      Page(s):
    835-841

    In this paper, we construct a software availability model considering the number of restoration actions. We correlate the failure and restoration characteristics of the software system with the cumulative number of corrected faults. Furthermore, we consider an imperfect debugging environment where the detected faults are not always corrected and removed from the system. The time-dependent behavior of the system alternating between up and down states is described by a Markov process. From this model, we can derive quantitative measures for software availability assessment considering the number of restoration actions. Finally, we show numerical examples of software availability analysis.
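
    For context, in the simplest special case of such a model, i.e., a single alternating up/down Markov process with constant failure rate \lambda and restoration rate \mu (ignoring the dependence on the number of corrected faults treated in the paper), the instantaneous availability starting from the up state is

      A(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda}{\lambda + \mu}\, e^{-(\lambda + \mu) t},
      \qquad A(\infty) = \frac{\mu}{\lambda + \mu}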

  • The Effective Smoothing Technique to Estimate the Optimal Software Release Schedule Based on Artificial Neural Network

    Tadashi DOHI  Yoshifumi YATSUNAMI  Yasuhiko NISHIO  Shunji OSAKI  

     
    PAPER

      Vol:
    E83-A No:5
      Page(s):
    796-803

    In this paper, we develop an effective smoothing technique to estimate the optimal software release schedule that minimizes the total software cost. The optimal software release problem essentially reduces to a statistical estimation problem for the software failure rate, but the resulting estimator, based on both the fault-detection time data observed in the testing phase and its predicted future values, is discontinuous and does not always function well for determining the optimal release schedule. We estimate a smoothed software failure rate using a standard quadratic programming approach and generate the optimal software release schedule with higher accuracy.

  • Parallelizing SDP (Sum of Disjoint Products) Algorithms for Fast Reliability Analysis

    Tatsuhiro TSUCHIYA  Tomoya KAJIKAWA  Tohru KIKUNO  

     
    LETTER-Fault Tolerance

      Vol:
    E83-D No:5
      Page(s):
    1183-1186

    The SDP (Sum of Disjoint Products) approach is a well-known technique for computing network reliability measures. So far several algorithms have been developed based on this approach. In this letter, we present a general framework for parallelization of these SDP algorithms. Based on the framework, we implemented a parallel version of an SDP algorithm called CAREL on a network of workstations. Experimental results show that it works fairly well with almost linear speedups.
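
    As a baseline for what SDP algorithms compute, the sketch below evaluates two-terminal network reliability exactly by enumerating edge states. It is exponential in the number of edges and is not CAREL or any other SDP method; the small bridge network used as an example is an assumption.

      from itertools import product

      def two_terminal_reliability(nodes, edges, s, t):
          """P(s and t are connected) for independent edges given as (u, v, reliability)."""
          total = 0.0
          for states in product([0, 1], repeat=len(edges)):
              prob = 1.0
              adj = {n: set() for n in nodes}
              for up, (u, v, p) in zip(states, edges):
                  prob *= p if up else (1.0 - p)
                  if up:
                      adj[u].add(v)
                      adj[v].add(u)
              stack, seen = [s], {s}                 # depth-first search over surviving edges
              while stack:
                  for m in adj[stack.pop()]:
                      if m not in seen:
                          seen.add(m)
                          stack.append(m)
              if t in seen:
                  total += prob
          return total

      edges = [("s", "a", 0.9), ("s", "b", 0.9), ("a", "b", 0.9), ("a", "t", 0.9), ("b", "t", 0.9)]
      print(round(two_terminal_reliability(["s", "a", "b", "t"], edges, "s", "t"), 6))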

  • Computing the Invariant Polynomials of Graphs, Networks and Matroids

    Hiroshi IMAI  

     
    INVITED SURVEY PAPER-Algorithms for Matroids and Related Discrete Systems

      Vol:
    E83-D No:3
      Page(s):
    330-343

    The invariant polynomials of discrete systems such as graphs, matroids, hyperplane arrangements, and simplicial complexes have been investigated actively in recent years. These invariants include the Tutte polynomial of a graph or a matroid, the chromatic polynomial of a graph, the reliability of a network, the Jones polynomial of a link, the percolation function of a grid, etc. The computational complexity of computing these invariants has been studied, and most of these problems have been shown to be #P-complete. However, these complexity results do not imply that the invariants of a given instance of moderate size cannot be computed in practice. To meet the large practical demand for computing these invariants, a framework based on binary decision diagrams (BDDs) has been proposed. It provides mildly exponential algorithms that are useful for solving moderate-size practical problems. This paper surveys the BDD-based approach to computing the invariants, together with some computational results showing the usefulness of the framework.
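
    The deletion-contraction recursion that most of these invariants specialize is the standard one for the Tutte polynomial of a graph G with an edge e, quoted here in its textbook form rather than in the BDD formulation surveyed in the paper:

      T(G; x, y) =
      \begin{cases}
        1 & \text{if } G \text{ has no edges},\\
        x \, T(G/e; x, y) & \text{if } e \text{ is a bridge},\\
        y \, T(G-e; x, y) & \text{if } e \text{ is a loop},\\
        T(G-e; x, y) + T(G/e; x, y) & \text{otherwise}.
      \end{cases}

    For example, the all-terminal reliability of a connected network whose edges all work with probability p is, up to a monomial prefactor in p and 1-p, the evaluation T(G; 1, 1/(1-p)).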

  • Sensitivity of the System Capacity with Respect to the System Reliability in a DS-CDMA Cellular System

    Insoo KOO  Gwangzeen KO  Yeongyoon CHOI  Kiseon KIM  

     
    LETTER-Mobile Communication

      Vol:
    E83-B No:3
      Page(s):
    742-745

    One of the most important capacity parameters in DS-CDMA cellular systems is the system reliability, on which a prescribed lower bound usually limits the reverse-link capacity. In this letter, the effect of the system reliability, as well as of imperfect power control, on the system capacity is considered quantitatively using sensitivity analysis in a multimedia DS-CDMA cellular system. As a result, an analytical closed-form formula is presented in terms of the standard deviation of the received SIR and the system reliability. In a numerical example, the sensitivity of the system capacity with respect to the system reliability ranges from 5 to 50 for reliabilities between 95% and 99%, the range in which we are interested.

  • Digital Watermarking Technique for Motion Pictures Based on Quantization

    Hiroshi OGAWA  Takao NAKAMURA  Atsuki TOMIOKA  Youichi TAKASHIMA  

     
    PAPER

      Vol:
    E83-A No:1
      Page(s):
    77-89

    A quantization-based watermarking system for motion pictures is proposed. In particular, methods for improving the image quality of watermarked video, the watermarking data tolerance, and the accuracy of watermark data detection are described. A quantitative evaluation of the reliability of watermarked data, which has not generally been discussed up to now, is also performed.
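
    A generic quantization-index-modulation (QIM) style embed/detect pair is sketched below to illustrate the basic idea on a single coefficient with step size delta. It is not the authors' motion-picture scheme, whose quantizer design, tolerance improvements, and detection statistics are specific to the paper.

      def qim_embed(x, bit, delta=8.0):
          """Embed one bit by quantizing the coefficient onto one of two offset lattices."""
          dither = delta / 4.0 if bit else -delta / 4.0
          return delta * round((x - dither) / delta) + dither

      def qim_detect(y, delta=8.0):
          """Recover the bit by finding which lattice the received coefficient is closer to."""
          candidates = []
          for bit in (0, 1):
              dither = delta / 4.0 if bit else -delta / 4.0
              requant = delta * round((y - dither) / delta) + dither
              candidates.append((abs(y - requant), bit))
          return min(candidates)[1]

      marked = qim_embed(37.3, 1)
      print(qim_detect(marked + 1.5))   # 1; detection survives noise with magnitude < delta / 4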

  • Evaluation of Two Load-Balancing Primary-Backup Process Allocation Schemes

    Heejo LEE  Jong KIM  Sung Je HONG  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E82-D No:12
      Page(s):
    1535-1544

    In this paper, we present two process allocation schemes for tolerating multiple faults when the primary-backup replication method is used. The first scheme, called the multiple backup scheme, runs multiple backup processes for each process to tolerate multiple faults. The second scheme, called the regenerative backup scheme, runs only one backup process for each process but regenerates backup processes for processes left without a backup after a fault occurs, so that the primary-backup process pair remains available. For both schemes, we propose heuristic process allocation methods that balance loads in spite of the occurrence of faults. We then evaluate and compare the performance of the proposed heuristic process allocation methods using simulation. Next, we analyze the reliability of the two schemes based on their fault-tolerance capability: we determine the degree of fault tolerance of each scheme and then derive its reliability using Markov chains. The comparison of the two schemes indicates that the regenerative single-backup process allocation scheme is more suitable than the multiple-backup allocation scheme.

  • Reliability of AlGaAs and InGaP Heterojunction Bipolar Transistors

    Noren PAN  Roger E. WELSER  Charles R. LUTZ  James ELLIOT  Jesse P. RODRIGUES  

     
    INVITED PAPER-RF Power Devices

      Vol:
    E82-C No:11
      Page(s):
    1886-1894

    Heterojunction bipolar transistors (HBTs) are key devices for a variety of applications including L-band power amplifiers, high speed A/D converters, broadband amplifiers, laser drivers, and low phase noise oscillators. AlGaAs emitter HBTs have demonstrated sufficient reliability for L-band mobile phone applications. For applications which require extended reliability performance at high junction temperatures (>250°C) and large current densities (>50 kA/cm^2), InGaP emitter HBTs are the preferred devices. The excellent reliability of InGaP/GaAs HBTs has been confirmed at various laboratories. At a moderate current density and junction temperature, Jc = 25 kA/cm^2 and Tj = 264°C, no device failures were reported out to 10,000 hours in a sample of 10 devices. Reliability testing performed up to a junction temperature of 360°C and at a higher current density (Jc = 60 kA/cm^2) showed an extrapolated MTTF of 5 × 10^5 hours at Tj = 150°C. The activation energy for AlGaAs/GaAs HBTs was 0.57 eV, while the activation energy for InGaP/GaAs HBTs was 0.68 eV, which indicated a similar failure mechanism for both devices.
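
    The temperature extrapolation implicit in these MTTF figures follows the usual Arrhenius model, stated here only in its generic form (k_B is the Boltzmann constant and T the absolute junction temperature), not as a fit reported in the paper:

      \mathrm{MTTF}(T) = A \exp\!\left(\frac{E_a}{k_B T}\right),
      \qquad
      \frac{\mathrm{MTTF}(T_1)}{\mathrm{MTTF}(T_2)} = \exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_1} - \frac{1}{T_2}\right)\right]

    Since the use temperature is lower than the stress temperature, a larger activation energy yields a larger acceleration factor and hence a longer extrapolated lifetime from the same stress data.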

  • Comments on Simplification of the BCJR Algorithm Using the Bidirectional Viterbi Algorithm

    Masato TAJIMA  Keiji TAKIDA  Zenshiro KAWASAKI  

     
    LETTER-Information Theory and Coding Theory

      Vol:
    E82-A No:10
      Page(s):
    2306-2310

    In this paper, we state some noteworthy facts in connection with simplification of the BCJR algorithm using the bidirectional Viterbi algorithm (BIVA). Specifically, we clarify the necessity of metric correction in the case where the BIVA is applied to reliability estimation and the information symbols u_j obey non-uniform probability distributions.
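
    The reason a correction is needed can be seen from the MAP path metric: with non-uniform priors, the branch metric carries an a priori term in addition to the channel term (a generic statement, not the authors' derivation),

      \mu_j = \log p\bigl(y_j \mid x(u_j)\bigr) + \log P(u_j),

    whereas a maximum-likelihood Viterbi metric keeps only the first term; the BIVA outputs must therefore be adjusted when P(u_j) is not uniform.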

  • Analysis of Tradeoffs between Efficiency, Power and Hot-Electron Reliability in GaAs MESFETs

    Yevgeniy A. TKACHENKO  Ce-Jun WEI  Aleksei P. KLIMASHOV  Dylan BARTLE  

     
    PAPER-Active Devices and Circuits

      Vol:
    E82-C No:7
      Page(s):
    1061-1066

    Tradeoffs between efficiency, power, and reliability were analyzed for GaAs MESFETs with variable recess structures. The MESFET process can be optimized for either best power/efficiency performance or best reliability by varying the width of the first recess. If the first recess width is increased by 0.4 µm, an estimated order-of-magnitude increase in device lifetime, limited by hot-electron-induced degradation, can be achieved at the expense of 3% in power-added efficiency and 20 mW/mm in output power. The reported hot-electron reliability highlights include a maximum sustainable reverse gate current stress of 100 mA/mm and a (Stress) × (Lifetime) figure of merit of 12.5 A·hr/cm, which advances the present state of the art by approximately an order of magnitude. The introduced (Stress) × (Lifetime) figure of merit is essential for the design-for-reliability of high-efficiency power amplifiers.

  • The Distributed Program Reliability Analysis on a Star Topology: Efficient Algorithms and Approximate Solution

    Ming-Sang CHANG  Deng-Jyi CHEN  Min-Sheng LIN  Kuo-Lung KU  

     
    PAPER-Software Theory

      Vol:
    E82-D No:6
      Page(s):
    1020-1029

    A distributed computing system consists of processing elements, communication links, memory units, data files, and programs. These resources are interconnected via a communication network and controlled by a distributed operating system. The distributed program reliability (DPR) in a distributed computing system is the probability that a program that runs on multiple processing elements and needs to retrieve data files from other processing elements will be executed successfully. This reliability varies according to 1) the topology of the distributed computing system, 2) the reliability of the communication edges, 3) the distribution of data files and programs among the processing elements, and 4) the data files required to execute the program. In this paper, we show that computing the distributed program reliability on a star distributed computing system is #P-complete. A polynomially solvable case is developed for computing the distributed program reliability when an additional restriction is placed on the file distribution over the star topology. We also propose a polynomial-time algorithm that computes approximate solutions for the distributed program reliability when the star topology does not satisfy this additional file-distribution restriction.
