
Keyword Search Result

[Keyword] reliability (282 hits)

Results 21-40 of 282

  • Low-Complexity Joint Transmit and Receive Antenna Selection for Transceive Spatial Modulation

    Junshan LUO  Shilian WANG  Qian CHENG  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2019/02/12
    Vol: E102-B No:8
    Page(s): 1695-1704

    Joint transmit and receive antenna selection (JTRAS) for transceive spatial modulation (TRSM) is investigated in this paper. Two low-complexity, efficient JTRAS algorithms are proposed to improve the reliability of TRSM systems by maximizing the minimum Euclidean distance (ED) among all received signals. Specifically, the QR-decomposition-based ED-JTRAS achieves near-optimal error performance with a moderate complexity reduction compared to the optimal ED-JTRAS method, while the singular-value-decomposition-based ED-JTRAS achieves sub-optimal error performance with a significant complexity reduction. Simulation results show that the proposed methods remarkably improve system reliability in both uncorrelated and spatially correlated Rayleigh fading channels, compared to the conventional norm-based JTRAS method.
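
    The selection criterion both proposed algorithms approximate is the exhaustive ED-JTRAS search: over all transmit/receive antenna subsets, keep the pair that maximizes the minimum pairwise Euclidean distance among spatial-modulation signals. Below is a minimal brute-force sketch of that baseline in Python/NumPy; the BPSK alphabet, array sizes, and function names are illustrative assumptions, not the paper's implementation.

```python
import itertools
import numpy as np

def min_euclidean_distance(H, symbols):
    """Minimum pairwise distance among received SM signals x = s * h_j."""
    Nt = H.shape[1]
    # Each spatial-modulation signal activates exactly one transmit antenna.
    signals = [s * H[:, j] for j in range(Nt) for s in symbols]
    return min(np.linalg.norm(a - b)
               for a, b in itertools.combinations(signals, 2))

def ed_jtras(H, nt, nr, symbols=(1.0, -1.0)):
    """Exhaustively pick nt transmit / nr receive antennas maximizing dmin."""
    Nr, Nt = H.shape
    best, best_sel = -np.inf, None
    for rows in itertools.combinations(range(Nr), nr):
        for cols in itertools.combinations(range(Nt), nt):
            d = min_euclidean_distance(H[np.ix_(rows, cols)], symbols)
            if d > best:
                best, best_sel = d, (rows, cols)
    return best_sel, best

# 6x6 Rayleigh channel, select 4 transmit and 4 receive antennas.
H = (np.random.randn(6, 6) + 1j * np.random.randn(6, 6)) / np.sqrt(2)
print(ed_jtras(H, nt=4, nr=4))
```

    The paper's QR- and SVD-based variants reduce the cost of exactly this inner search.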

  • Security Performance Analysis for Relay Selection in Cooperative Communication System under Nakagami-m Fading Channel

    Guangna ZHANG  Yuanyuan GAO  Huadong LUO  Nan SHA  Shijie WANG  Kui XU  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2018/09/14
    Vol: E102-B No:3
    Page(s): 603-612

    In this paper, we investigate a cooperative communication system comprising a source, a destination, and multiple decode-and-forward (DF) relays in the presence of a potentially malicious eavesdropper, which may lie within or outside the coverage area of the source. Based on the more general Nakagami-m fading channel model, we analyze the security performance of single-relay and multi-relay selection schemes for protecting the source against eavesdropping. In the single-relay selection scheme, only the best relay is chosen to assist the source transmission. Differing from single-relay selection, the multi-relay selection scheme allows multiple relays to forward the source signal to the destination. We also consider classic direct transmission as a benchmark against the two relay selection schemes. We derive exact closed-form expressions of the outage probability (OP) and intercept probability (IP) for direct transmission, single-relay selection, and multi-relay selection over Nakagami-m fading channels, for an eavesdropper both within and outside the coverage area of the source. Moreover, the security-reliability tradeoff (SRT) of these three schemes is analyzed. It is verified that the SRT of multi-relay selection consistently outperforms that of single-relay selection, and that both relay selection schemes outperform direct transmission when the number of relays is large, regardless of whether the eavesdropper lies within the coverage area of the source. In addition, as the number of DF relays increases, the SRT of the relay selection schemes improves notably. However, the SRT of both relay selection approaches becomes worse when the eavesdropper is within the coverage area of the source.
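
    Closed-form OP/IP expressions of this kind can be sanity-checked by Monte Carlo simulation, using the fact that Nakagami-m power gains are Gamma-distributed with shape m and mean Ω. The sketch below estimates both probabilities for a best-relay selection rule under illustrative parameters; it is a hedged approximation, not the paper's exact system model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gains(m, omega, size):
    # Nakagami-m power gain |h|^2 ~ Gamma(shape=m, scale=omega/m).
    return rng.gamma(shape=m, scale=omega / m, size=size)

def srt_single_relay(n_relays, snr, rate, m=2.0, omega=1.0, trials=20_000):
    outage = intercept = 0
    for _ in range(trials):
        g_sr = gains(m, omega, n_relays)   # source -> relays
        g_rd = gains(m, omega, n_relays)   # relays -> destination
        g_re = gains(m, omega, n_relays)   # relays -> eavesdropper
        decoded = np.log2(1 + snr * g_sr) >= rate   # DF decoding set
        if not decoded.any():
            outage += 1
            continue
        # Best relay: strongest relay->destination link in the decoding set.
        best = np.flatnonzero(decoded)[np.argmax(g_rd[decoded])]
        outage += np.log2(1 + snr * g_rd[best]) < rate
        intercept += np.log2(1 + snr * g_re[best]) >= rate
    return outage / trials, intercept / trials

print(srt_single_relay(n_relays=4, snr=10.0, rate=1.0))  # (OP, IP)
```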

  • Kink Suppression and High Reliability of Asymmetric Dual Channel Poly-Si Thin Film Transistors for High Voltage Bias Stress

    Joonghyun PARK  Myunghun SHIN  

     
    BRIEF PAPER-Semiconductor Materials and Devices

    Vol: E102-C No:1
    Page(s): 95-98

    Asymmetrically designed polycrystalline silicon (poly-Si) thin-film transistors (TFTs) were fabricated and investigated to suppress the kink effect and to improve electrical reliability. The asymmetric dual-channel-length poly-Si TFT (ADCL) shows the greatest reduction of kink and leakage currents. Technology computer-aided design simulation shows that the ADCL structure can induce a suitably high voltage at the floating node of the TFT at high drain-source voltage (VDS), which mitigates impact ionization and transconductance degradation, yielding high reliability under hot-carrier stress.

  • Avoiding Performance Impacts by Re-Replication Workload Shifting in HDFS Based Cloud Storage

    Thanda SHWE  Masayoshi ARITSUGI  

     
    PAPER-Cloud Computing

    Publicized: 2018/09/18
    Vol: E101-D No:12
    Page(s): 2958-2967

    Data replication in cloud storage systems brings many benefits, such as fault tolerance, data availability, data locality, and load balancing, from both reliability and performance perspectives. However, each time a datanode fails, the data blocks stored on it must be restored to maintain the replication level. This can be a large burden on a system whose resources are already highly utilized by users' application workloads. Although there have been many proposals for replication, re-replication has not been properly addressed yet. In this paper, we present a deferred re-replication algorithm that dynamically shifts the re-replication workload based on the current resource utilization of the system. As workload patterns vary with the time of day, simulation results on synthetic workloads demonstrate a large opportunity for minimizing the impact on users' application workloads with this simple algorithm that adjusts re-replication according to current resource utilization. Our approach reduces the performance impact on users' application workloads while ensuring the same reliability level that default HDFS provides.
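
    A minimal sketch of the deferral idea follows: re-replicate lost blocks immediately only while resource utilization is low, and otherwise queue the work until utilization drops. The utilization probe, threshold, and class names are assumptions for illustration, not the paper's algorithm.

```python
import collections

class DeferredReReplicator:
    """Queue re-replication work and release it only when the cluster is idle
    enough, so repairs do not compete with user workloads."""

    def __init__(self, get_utilization, high_water=0.7):
        self.get_utilization = get_utilization   # callable -> value in [0, 1]
        self.high_water = high_water
        self.pending = collections.deque()

    def on_datanode_failure(self, lost_blocks):
        self.pending.extend(lost_blocks)         # defer instead of rebuilding now

    def tick(self, replicate):
        # Called periodically: drain the queue only in under-utilized periods.
        while self.pending and self.get_utilization() < self.high_water:
            replicate(self.pending.popleft())
```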

  • Fast Algorithm for Optimal Arrangement in Connected-(m-1, n-1)-out-of-(m, n):F Lattice System

    Taishin NAKAMURA  Hisashi YAMAMOTO  Tomoaki AKIBA  

     
    PAPER-Reliability, Maintainability and Safety Analysis

    Vol: E101-A No:12
    Page(s): 2446-2453

    An optimal arrangement problem involves finding a component arrangement that maximizes system reliability, namely, the optimal arrangement. Obtaining the optimal arrangement is useful when designing a practical system. An existing study developed an algorithm for finding the optimal arrangement of a connected-(r, s)-out-of-(m, n):F lattice system with r=m-1 and n<2s. However, that algorithm is time-consuming for systems with many components. In this study, we develop an algorithm for efficiently finding the optimal arrangement of the system with r=m-1 and s=n-1 based on the depth-first branch-and-bound method. In the algorithm, before enumerating arrangements, we fix some components without computing the system reliability. As a result, we can find the optimal arrangement efficiently because the number of components that must be assigned decreases. Furthermore, we develop an efficient method for computing the system reliability. A numerical experiment demonstrates the effectiveness of the proposed algorithm.
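
    For orientation, a generic depth-first branch-and-bound skeleton for the optimal arrangement problem is sketched below: positions are filled one component at a time, and a subtree is pruned whenever an upper bound on its best achievable reliability cannot beat the incumbent. The bound and the reliability oracle are placeholders; the paper's algorithm adds system-specific component fixing and an efficient reliability computation that this sketch omits.

```python
def branch_and_bound(components, system_reliability, upper_bound):
    """Maximize system_reliability(arrangement) over permutations of components.

    upper_bound(prefix, remaining) must never underestimate the reliability
    achievable by completing the partial arrangement `prefix`."""
    best = {"value": -1.0, "arrangement": None}

    def dfs(prefix, remaining):
        if not remaining:
            r = system_reliability(prefix)
            if r > best["value"]:
                best["value"], best["arrangement"] = r, tuple(prefix)
            return
        if upper_bound(prefix, remaining) <= best["value"]:
            return  # prune: this subtree cannot beat the incumbent
        for i, c in enumerate(remaining):
            dfs(prefix + [c], remaining[:i] + remaining[i + 1:])

    dfs([], list(components))
    return best["arrangement"], best["value"]
```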

  • Mitigating Pilot Contamination in Massive MIMO Using Cell Size Reduction

    Parfait I. TEBE  Yujun KUANG  Affum E. AMPOMA  Kwasi A. OPARE  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2017/10/24
    Vol: E101-B No:5
    Page(s): 1280-1290

    In this paper, we provide a novel solution for mitigating pilot contamination in massive MIMO. In the proposed approach, we consider seven copilot cells in the first layer of interfering cells of a cellular network. We derive and formulate the worst-case signal-to-interference power ratio (SIR) of a typical user in both the downlink and uplink of a pilot-contaminated cell. Based on the formulated SIR and other system considerations, the total pilot sequence length, the reliability of channel estimation within the cell, and the spectral and energy efficiencies are derived and formulated for the downlink. The user's transmit power and the achievable sum rate are also derived and formulated for the uplink. Our results show that when the cell size is reduced, pilot contamination is significantly mitigated and hence system performance is improved.
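
    In the large-antenna limit, the worst-case downlink SIR of a pilot-contaminated cell is commonly taken as the squared large-scale gain to the serving base station over the sum of squared gains to the copilot base stations. The sketch below evaluates that quantity for a cell-edge user with a six-cell first ring of copilot interferers; the paper considers seven copilot cells and a full system model, so the geometry and path-loss exponent here are simplifying assumptions.

```python
import numpy as np

def worst_case_sir_db(R=500.0, alpha=3.8):
    """Contamination-limited downlink SIR (M -> infinity) of a cell-edge user."""
    d_bs = np.sqrt(3) * R                 # distance between adjacent hex cell centers
    angles = np.deg2rad(np.arange(0, 360, 60))
    bs = d_bs * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    user = np.array([R, 0.0])             # cell edge, toward one interferer
    beta0 = R ** (-alpha)                 # large-scale gain to the serving BS
    betas = np.linalg.norm(bs - user, axis=1) ** (-alpha)
    sir = beta0 ** 2 / np.sum(betas ** 2) # SIR = beta_0^2 / sum_l beta_l^2
    return 10 * np.log10(sir)

print(worst_case_sir_db())
```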

  • Reliability Analysis of Scaled NAND Flash Memory Based SSDs with Real Workload Characteristics by Using Real Usage-Based Precise Reliability Test

    Yusuke YAMAGA  Chihiro MATSUI  Yukiya SAKAKI  Ken TAKEUCHI  

     
    PAPER

    Vol: E101-C No:4
    Page(s): 243-252

    In order to reduce memory cell errors in the real-life usage of NAND flash-based SSDs, a real-usage-based precise reliability test for the NAND flash of SSDs has been proposed. The reliability of the NAND flash memories in SSDs degrades seriously as memory cells are scaled down. However, conventional simple reliability tests of read-disturb and data-retention cannot reproduce the real-life VTH shift and memory cell errors. To solve this problem, the proposed reliability test precisely reproduces real memory cell failures by emulating the complicated read, write, and data-retention behavior with an SSD emulator. In this paper, the real-life VTH shifts and memory cell errors of two generations of NAND flash memory under differently characterized real workloads are provided. Using the proposed test method, a 1.6-times BER difference is observed when a write-cold, read-hot workload (hm_1) and a write-hot, read-hot workload (prxy_1) are compared in 1Ynm MLC NAND flash. In addition, with NAND flash memory scaling from the 1Xnm to the 1Ynm generation, the discrepancy in error numbers between the conventional reliability test result and the actual reliability measured by the proposed test increases by 6.3 times. Finally, guidelines for read reference voltage shifts and ECC strength are given to achieve high memory cell reliability for various workloads.

  • Using Hierarchical Scenarios to Predict the Reliability of Component-Based Software

    Chunyan HOU  Jinsong WANG  Chen CHEN  

     
    PAPER-Software Engineering

    Publicized: 2017/11/07
    Vol: E101-D No:2
    Page(s): 405-414

    System scenarios derived from the system design specification play an important role in the reliability engineering of component-based software systems. Several scenario-based approaches have been proposed to predict the reliability of a system at design time; most adopt a flat construction of scenarios, which does not conform to software design specifications and is prone to the state-space explosion problem in large systems. This paper identifies various challenges related to scenario modeling in the early design stages based on the software architecture specification. A novel scenario-based reliability modeling and prediction approach is introduced. The approach adopts hierarchical scenario specifications to model software reliability, avoiding state-space explosion and reducing computational complexity. Finally, an evaluation experiment shows the potential of the approach.
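
    One way to picture the hierarchical idea: a scenario is either a primitive component execution or a probability-weighted composition of sub-scenarios, and reliabilities multiply along each execution path, so the system is evaluated compositionally instead of through one flat state space. The sketch below is an assumed toy formulation, not the paper's model; all class names and numbers are illustrative.

```python
from dataclasses import dataclass, field

def prod(xs):
    r = 1.0
    for x in xs:
        r *= x
    return r

@dataclass
class Component:
    reliability: float
    def eval(self):
        return self.reliability

@dataclass
class Scenario:
    # Alternative execution paths: list of (probability, [sub-steps]) pairs,
    # where a sub-step is a Component or a nested Scenario.
    paths: list = field(default_factory=list)
    def eval(self):
        return sum(p * prod(step.eval() for step in steps)
                   for p, steps in self.paths)

db, ui, auth = Component(0.999), Component(0.995), Component(0.998)
login = Scenario([(1.0, [ui, auth, db])])                # low-level scenario
system = Scenario([(0.7, [login, db]), (0.3, [login])])  # composes sub-scenarios
print(system.eval())
```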

  • Efficient Aging-Aware Failure Probability Estimation Using Augmented Reliability and Subset Simulation

    Hiromitsu AWANO  Takashi SATO  

     
    PAPER

    Vol: E100-A No:12
    Page(s): 2807-2815

    A circuit-aging simulation that efficiently calculates the temporal change of rare circuit-failure probability is proposed. While conventional methods require long computational times because the failure probability must be calculated separately at each device age, the proposed Monte Carlo based method requires only a single set of simulations. By applying the augmented reliability and subset simulation framework, the change of failure probability over the lifetime of the device can be evaluated through analysis of the Monte Carlo samples. Combined with a two-step sample generation technique, the proposed method reduces the computational time to about 1/6 of that of the conventional method while maintaining sufficient estimation accuracy.
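
    A plain subset-simulation estimator conveys the core mechanism: the rare failure probability P(g(x) ≤ 0) is factored into a product of larger conditional probabilities via adaptively chosen intermediate thresholds, with Markov chain moves generating the conditional samples. In the paper's augmented-reliability framing, device age effectively becomes one more random input coordinate so that a single run covers the lifetime; the sketch below shows only generic subset simulation with assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def subset_simulation(g, dim, n=1000, p0=0.1, max_levels=10):
    """Estimate P(g(x) <= 0) for x ~ standard normal in `dim` dimensions."""
    x = rng.standard_normal((n, dim))
    y = np.apply_along_axis(g, 1, x)
    prob = 1.0
    for _ in range(max_levels):
        thresh = np.quantile(y, p0)
        if thresh <= 0:                       # reached the true failure domain
            return prob * np.mean(y <= 0)
        prob *= p0
        seeds = x[y <= thresh]
        x = seeds[rng.integers(len(seeds), size=n)].copy()
        # Metropolis moves targeting the normal law restricted below thresh.
        for _ in range(5):
            cand = x + 0.5 * rng.standard_normal(x.shape)
            accept = np.exp(0.5 * (np.sum(x**2, 1) - np.sum(cand**2, 1)))
            ok = rng.random(n) < accept
            ok &= np.apply_along_axis(g, 1, cand) <= thresh
            x[ok] = cand[ok]
        y = np.apply_along_axis(g, 1, x)
    return prob * np.mean(y <= 0)

# Example: failure when a standard-normal "margin" drops below -4 (P ~ 3e-5).
print(subset_simulation(lambda v: v[0] + 4.0, dim=2))
```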

  • Replication of Random Telegraph Noise by Using a Physical-Based Verilog-AMS Model

    Takuya KOMAWAKI  Michitarou YABUUCHI  Ryo KISHIDA  Jun FURUTA  Takashi MATSUMOTO  Kazutoshi KOBAYASHI  

     
    PAPER

    Vol: E100-A No:12
    Page(s): 2758-2763

    As device sizes are scaled down to the nanometer regime, random telegraph noise (RTN) becomes dominant, so it is indispensable to estimate its effect accurately. We propose an RTN simulation method for analog circuits based on the charge trapping model. The RTN-induced threshold voltage fluctuations are replicated by attaching a variable DC voltage source to the gate of a MOSFET using Verilog-AMS. In recent deca-nanometer processes, high-k (HK) materials are used in gate dielectrics to decrease leakage current, so the defect distribution characteristics of both the HK and interface layer (IL) dielectrics must be considered. Our RTN model can be applied to the bimodal model, which includes the characteristics of both the HK and IL dielectrics. We confirm that the drain currents of MOSFETs fluctuate over time in circuit-level simulations and that the RTN fluctuations differ from MOSFET to MOSFET. RTN affects the frequency characteristics of ring oscillators (ROs): the distribution of RTN-induced frequency fluctuations has a long tail in an HK process, which the bimodal RTN model can replicate. Our proposed method can estimate the temporal impact of RTN on circuits containing multiple transistors.
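
    For a single defect, the charge-trapping picture reduces to a two-state Markov process: the trap captures a carrier with mean time constant τc and emits it with τe, and its occupancy adds a step ΔVth to the threshold voltage. The sketch below simulates that process and superposes several traps, mimicking a bimodal HK/IL defect population; all time constants and step heights are illustrative assumptions rather than extracted device parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def rtn_trace(t_end, dt, tau_c, tau_e, dvth=5e-3):
    """Vth shift from one trap: capture rate 1/tau_c, emission rate 1/tau_e."""
    n = int(t_end / dt)
    occupied = False
    out = np.empty(n)
    for i in range(n):
        rate = 1.0 / tau_e if occupied else 1.0 / tau_c
        if rng.random() < rate * dt:      # memoryless transition within dt
            occupied = not occupied
        out[i] = dvth if occupied else 0.0
    return out

# Superpose a few traps with log-uniform time constants; HK- and IL-like
# populations would use two different distributions, giving the bimodal model.
vth_shift = sum(rtn_trace(0.1, 1e-6,
                          tau_c=10 ** rng.uniform(-4, -2),
                          tau_e=10 ** rng.uniform(-4, -2)) for _ in range(4))
print(vth_shift.max())
```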

  • An Incremental Simulation Technique Based on Delta Model for Lifetime Yield Analysis

    Nguyen Cao QUI  Si-Rong HE  Chien-Nan Jimmy LIU  

     
    PAPER-VLSI Design Technology and CAD

    Vol: E100-A No:11
    Page(s): 2370-2378

    As devices continue to shrink, parameter shifts due to process variation and aging effects have an increasing impact on circuit yield and reliability. However, predicting how long a circuit can keep its yield above the design specification is difficult because the yield changes during the aging process, and performing Monte Carlo (MC) simulation iteratively during aging analysis is infeasible. Therefore, most existing approaches sacrifice continuity between simulations to obtain high speed, which may accumulate extrapolation errors over time. In this paper, an incremental simulation technique is proposed for lifetime yield analysis that improves simulation speed while maintaining analysis accuracy. Because aging is a gradual process, the proposed incremental technique effectively reduces the simulation time. For yield analysis with degraded performance, the technique also reduces the simulation time because each MC sample is the same circuit with small parameter changes. When the proposed dynamic aging sampling technique is employed, a 50× speedup is obtained with almost no loss of accuracy, which considerably improves the efficiency of lifetime yield analysis.

  • A Novel Component Ranking Method for Improving Software Reliability

    Lixing XUE  Decheng ZUO  Zhan ZHANG  Na WU  

     
    LETTER-Dependable Computing

    Publicized: 2017/07/24
    Vol: E100-D No:10
    Page(s): 2653-2658

    This paper proposes a component ranking method to identify important components that have a great impact on system reliability. In contrast to an existing method, it holds that components which frequently invoke other components have more impact than others, and it employs component invocation structures and invocation frequencies to rank components by importance. The method can strongly support improving the reliability of software systems, especially large-scale systems. Extensive experiments are provided to validate the method and compare its performance.
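
    The ranking intuition ("frequent invokers are important") can be realized as a PageRank-style iteration on the invocation graph with the edges reversed, so that importance flows from callees back to their callers in proportion to invocation frequency. This is an assumed concrete realization for illustration, not the paper's exact formulation.

```python
def rank_components(invocations, damping=0.85, iters=50):
    """invocations: {caller: {callee: frequency}} -> [(component, score), ...]"""
    nodes = set(invocations) | {c for m in invocations.values() for c in m}
    # Reverse the edges so importance flows from callee back to its callers:
    # components that invoke others frequently accumulate high scores.
    rev = {v: {} for v in nodes}
    for caller, callees in invocations.items():
        for callee, freq in callees.items():
            rev[callee][caller] = rev[callee].get(caller, 0) + freq
    score = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - damping) / len(nodes) for v in nodes}
        for src, dsts in rev.items():
            total = sum(dsts.values())
            if total == 0:                       # dangling node: spread evenly
                for v in nodes:
                    nxt[v] += damping * score[src] / len(nodes)
            else:
                for dst, freq in dsts.items():
                    nxt[dst] += damping * score[src] * freq / total
        score = nxt
    return sorted(score.items(), key=lambda kv: -kv[1])

print(rank_components({"A": {"B": 5, "C": 1}, "B": {"C": 2}, "C": {}}))
```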

  • Task Scheduling Based Redundant Task Allocation Method for the Multi-Core Systems with the DTTR Scheme

    Hiroshi SAITO  Masashi IMAI  Tomohiro YONEDA  

     
    PAPER

    Vol: E100-A No:7
    Page(s): 1363-1373

    In this paper, we propose a redundant task allocation method for multi-core systems based on the Duplication with Temporary Triple-Modular Redundancy and Reconfiguration (DTTR) scheme. The proposed method determines the allocation of a given task graph onto a given multi-core system model through task scheduling over given fault patterns. A fault pattern, as defined in this paper, consists of a set of faulty cores and a set of surviving cores. To optimize the average failure rate of the system, task scheduling minimizes the execution time of the task graph while preserving the property of the DTTR scheme. In addition, we propose a method for selecting which fault patterns to schedule in order to reduce the task allocation time. In the experiments, we first evaluate the proposed fault-pattern selection method in terms of task allocation time. We then compare the average failure rate among the proposed method, a method that packs tasks onto particular cores as much as possible, a method based on Simulated Annealing (SA), a method based on Integer Linear Programming (ILP), and a task-scheduling-based method that does not consider the DTTR property. The experimental results show that the proposed method achieves nearly the same average failure rate as the SA-based method with a shorter task allocation time.

  • Utilization of Path-Clustering in Efficient Stress-Control Gate Replacement for NBTI Mitigation

    Shumpei MORITA  Song BIAN  Michihiro SHINTANI  Masayuki HIROMOTO  Takashi SATO  

     
    PAPER

    Vol: E100-A No:7
    Page(s): 1464-1472

    Replacing highly stressed logic gates with internal node control (INC) logic is known to be an effective way to alleviate timing degradation due to NBTI. We propose a path-clustering approach to accelerate the search for effective replacement gates. Based on the observation that some paths always become timing-critical after aging, critical path candidates are clustered and a representative path is selected from each cluster. With an efficient data structure that further reduces timing calculation, INC logic optimization becomes tractable in practical time for the first time. In experiments on a processor, a 171x speedup is demonstrated while retaining almost the same level of mitigation gain.

  • Reliability Function and Strong Converse of Biometrical Identification Systems Based on List-Decoding

    Vamoua YACHONGKA  Hideki YAGI  

     
    LETTER-Information Theory

    Vol: E100-A No:5
    Page(s): 1262-1266

    The biometrical identification system, introduced by Willems et al., is a system to identify individuals based on their measurable physical characteristics. Willems et al. characterized the identification capacity of a discrete memoryless biometrical identification system from information theoretic perspectives. Recently, Mori et al. have extended this scenario to list-decoding whose list size is an exponential function of the data length. However, as the data length increases, how the maximum identification error probability (IEP) behaves for a given rate has not yet been characterized for list-decoding. In this letter, we investigate the reliability function of the system under fixed-size list-decoding, which is the optimal exponential behavior of the maximum IEP. We then use Arimoto's argument to analyze a lower bound on the maximum IEP with list-decoding when the rate exceeds the capacity, which leads to the strong converse theorem. All results are derived under the condition that an unknown individual need not be uniformly distributed and the identification process is done without the knowledge of the prior distribution.
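
    For orientation, the quantity studied here can be written in standard information-theoretic notation; the symbols below follow generic conventions (n for data length, R for rate, C for the identification capacity) and are assumptions rather than the letter's exact notation.

```latex
% Reliability function: the optimal exponent of the maximum identification
% error probability (IEP) under fixed-size list-decoding.
E(R) \;=\; \lim_{n \to \infty} -\frac{1}{n} \log P^{\max}_{\mathrm{IEP}}(n, R)
% Strong converse regime: when the rate exceeds the identification
% capacity C, the maximum IEP tends to one.
R > C \;\Longrightarrow\; \lim_{n \to \infty} P^{\max}_{\mathrm{IEP}}(n, R) = 1
```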

  • Computing K-Terminal Reliability of Circular-Arc Graphs

    Chien-Min CHEN  Min-Sheng LIN  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2016/09/06
    Vol: E99-D No:12
    Page(s): 3047-3052

    Let G be a graph and K be a set of target vertices of G. Assume that all vertices of G, except the vertices in K, may fail with given probabilities. The K-terminal reliability of G is the probability that all vertices in K are mutually connected. This reliability problem is known to be #P-complete for general graphs. This work develops the first polynomial-time algorithm for computing the K-terminal reliability of circular-arc graphs.
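
    The problem statement translates directly into an exponential-time brute-force computation, useful as a correctness oracle on tiny instances: enumerate the up/down states of the unreliable non-terminal vertices and sum the probabilities of the states in which all terminals are mutually connected. The paper's contribution, a polynomial-time algorithm exploiting circular-arc structure, is not attempted here; the example graph is illustrative.

```python
import itertools
from collections import defaultdict

def k_terminal_reliability(edges, K, p_fail):
    """K-terminal reliability with unreliable non-terminal vertices.
    edges: iterable of (u, v); K: set of perfect target vertices;
    p_fail: {vertex: failure probability} for vertices not in K."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    others = sorted(p_fail)
    total = 0.0
    for states in itertools.product([True, False], repeat=len(others)):
        alive = set(K) | {v for v, up in zip(others, states) if up}
        prob = 1.0
        for v, up in zip(others, states):
            prob *= (1.0 - p_fail[v]) if up else p_fail[v]
        seen, stack = set(), [next(iter(K))]
        while stack:                      # DFS over surviving vertices
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(w for w in adj[u] if w in alive and w not in seen)
        if K <= seen:
            total += prob
    return total

# Cycle: terminals 0 and 2 connected via unreliable vertices 1 and 3.
print(k_terminal_reliability([(0, 1), (1, 2), (2, 3), (3, 0)],
                             K={0, 2}, p_fail={1: 0.1, 3: 0.1}))  # 0.99
```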

  • Probabilistic Analysis of the Network Reliability Problem on Random Graph Ensembles

    Akiyuki YANO  Tadashi WADAYAMA  

     
    PAPER-Networks and Network Coding

    Vol: E99-A No:12
    Page(s): 2218-2225

    In the field of computer science, the network reliability problem of evaluating the network failure probability has been extensively investigated. For a given undirected graph G, the network failure probability is the probability that edge failures (i.e., edge erasures) make G unconnected, where edge failures are assumed to occur independently with the same probability. The main contributions of the present paper are upper and lower bounds on the expected network failure probability. We herein assume a simple random graph ensemble that is closely related to the Erdős-Rényi random graph ensemble. These upper and lower bounds exhibit the typical behavior of the network failure probability. The proof is based on the fact that the cut-set space of G is a linear space over F2 spanned by the incidence matrix of G. The present study shows a close relationship between the ensemble analysis of the expected network failure probability and the ensemble analysis of the error detection probability of LDGM codes with column weight 2.
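
    The quantity being bounded is easy to estimate numerically for a fixed graph: erase each edge independently with probability p and check connectivity. A minimal Monte Carlo sketch follows, with a union-find connectivity test; the example graph is illustrative and stands in for samples from the paper's random graph ensemble.

```python
import random

def connected(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def failure_probability(n, edges, p_erase, trials=20_000):
    fails = 0
    for _ in range(trials):
        surviving = [e for e in edges if random.random() >= p_erase]
        fails += not connected(n, surviving)
    return fails / trials

# Ring plus two chords: modest redundancy against edge erasures.
edges = [(i, (i + 1) % 8) for i in range(8)] + [(0, 4), (2, 6)]
print(failure_probability(8, edges, p_erase=0.05))
```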

  • Reliability-Enhanced ECC-Based Memory Architecture Using In-Field Self-Repair

    Gian MAYUGA  Yuta YAMATO  Tomokazu YONEDA  Yasuo SATO  Michiko INOUE  

     
    PAPER-Dependable Computing

    Publicized: 2016/06/27
    Vol: E99-D No:10
    Page(s): 2591-2599

    Embedded memory is used extensively in SoCs and is rapidly growing in size and density. It enables SoCs to offer greater functionality, but at the expense of occupying the largest share of chip area. Owing to the continuous scaling of nanoscale device technology, such large memories introduce aging-induced faults and soft errors, which affect reliability. In-field test and repair, as well as ECC, can be used to maintain reliability, and recently these methods have been combined so that uncorrectable words are repaired while correctable words are left to the ECC. In this paper, we propose a novel in-field repair strategy that repairs uncorrectable words, and possibly correctable words, for an ECC-based memory architecture. It executes an adaptive reconfiguration method that ensures 'fresh' memory words are always used until the spare words run out. Experimental results demonstrate that our strategy enhances reliability with only a small area overhead.

  • The Reliability Analysis of the 1-out-of-2 System in Which Two Modules Do Mutual Cooperation in Recovery Mode

    Aromhack SAYSANASONGKHAM  Satoshi FUKUMOTO  

     
    LETTER-Reliability, Maintainability and Safety Analysis

    Vol: E99-A No:9
    Page(s): 1730-1734

    In this research, we investigated the reliability of a 1-out-of-2 system with two-stage repair comprising hardware restoration and data reconstruction modes. Hardware restoration is normally executed independently by the two modules. In contrast, we assumed that one of the modules could omit data reconstruction by replicating the data from the other module during normal operation; in this 1-out-of-2 system, the two modules cooperate in recovery mode. As a first step, an evaluation model using Markov chains was constructed to derive a reliability measure: unavailability in the steady state. Numerical examples confirmed that the reliability of the system is improved by the use of two cooperating modules. As the data reconstruction time increased, the gains in system reliability also increased.
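
    The evaluation style, deriving steady-state unavailability from a small continuous-time Markov chain, can be sketched directly. The four-state chain below (both modules up; one in hardware restoration; one in data reconstruction; system down) and its rates are an assumed illustrative model, not the letter's exact chain.

```python
import numpy as np

lam, mu_hw, mu_data = 1e-3, 0.1, 0.5  # failure / restoration / reconstruction rates

# States: 0 both up, 1 hardware restoration, 2 data reconstruction, 3 down.
Q = np.array([
    [-2 * lam,  2 * lam,          0.0,              0.0   ],
    [ 0.0,     -(mu_hw + lam),    mu_hw,            lam   ],
    [ mu_data,  0.0,             -(mu_data + lam),  lam   ],
    [ 0.0,      0.0,              mu_hw,           -mu_hw ],
])

# Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation.
A = np.vstack([Q.T[:-1], np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print("steady-state unavailability:", pi[3])
```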

  • Reliability and Failure Impact Analysis of Distributed Storage Systems with Dynamic Refuging

    Hiroaki AKUTSU  Kazunori UEDA  Takeru CHIBA  Tomohiro KAWAGUCHI  Norio SHIMOZONO  

     
    PAPER-Data Engineering, Web Information Systems

    Publicized: 2016/06/17
    Vol: E99-D No:9
    Page(s): 2259-2268

    In recent data centers, large-scale storage systems storing big data comprise thousands of large-capacity drives. Our goal is to establish a method for building highly reliable storage systems using more than a thousand low-cost large-capacity drives. Some large-scale storage systems protect data by erasure coding to prevent data loss. As the redundancy level of erasure coding is increased, the probability of data loss decreases, but more normal data write operations and additional storage for coding are incurred; we therefore need to achieve high reliability at the lowest possible redundancy level. There are two concerns regarding reliability in large-scale storage systems: (i) as the number of drives increases, systems become more subject to multiple drive failures, and (ii) distributing stripes among many drives can speed up the rebuild time but increases the risk of data loss due to multiple drive failures. If data loss occurs due to multiple drive failures, it affects many users of the storage system. These concerns were not addressed in prior quantitative reliability studies based on realistic settings. In this work, we analyze the reliability of large-scale storage systems with distributed stripes, focusing on an effective rebuild method that we call Dynamic Refuging. Dynamic Refuging rebuilds failed blocks starting from those with the lowest redundancy and strategically selects blocks to read for repairing lost data. We modeled the dynamic change in the amount of storage at each redundancy level caused by multiple drive failures, and performed a reliability analysis by Monte Carlo simulation using realistic drive failure characteristics. We also present a failure impact model and a method for localizing failures. When stripes with redundancy level 3 were sufficiently distributed and rebuilt by Dynamic Refuging, the proposed technique scaled well, and the probability of data loss decreased by two orders of magnitude for systems with a thousand drives compared to normal RAID. An appropriate setting of the stripe distribution level could localize the failure.
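
    The Dynamic Refuging priority rule described above, rebuilding first the blocks whose stripes retain the least redundancy, can be sketched as a priority queue over damaged stripes. Data structures and the example placement are illustrative; the paper couples this rule with strategic read selection and Monte Carlo analysis that this sketch omits.

```python
import heapq

def plan_rebuild(stripes, failed_drives):
    """stripes: {stripe_id: {drive_id, ...}} replica/fragment placement.
    Returns rebuild order, most endangered stripes first."""
    queue = []
    for sid, drives in stripes.items():
        lost = drives & failed_drives
        if lost:
            remaining = len(drives) - len(lost)   # redundancy left
            heapq.heappush(queue, (remaining, sid, lost))
    order = []
    while queue:
        remaining, sid, lost = heapq.heappop(queue)
        order.append((sid, remaining, lost))
    return order

stripes = {"s1": {0, 1, 2, 3}, "s2": {1, 2, 4, 5}, "s3": {2, 3, 4, 6}}
print(plan_rebuild(stripes, failed_drives={2, 4}))  # s2, s3 before s1
```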
