
Keyword Search Result

[Keyword] cost reduction (11 hits)

1-11 of 11 hits
  • Reordering-Based Test Pattern Reduction Considering Critical Area-Aware Weighted Fault Coverage

    Masayuki ARAI  Kazuhiko IWASAKI  

     
    PAPER

    Vol: E100-A No:7    Page(s): 1488-1495

    Shrinking feature sizes and higher levels of integration in semiconductor manufacturing technologies are increasingly widening the gap between defect levels estimated at the design stage and those reported for fabricated devices. In this paper, we propose a unified weighted fault coverage metric that includes both bridge and open faults, using the critical area to model the incidence rate of each fault. We then propose a test pattern reordering scheme that incorporates this weighted fault coverage with the aim of reducing test costs. A greedy algorithm reorders the test patterns generated by the bridge and stuck-at automatic test pattern generator (ATPG), and we evaluate the relationship between the number of patterns and the weighted fault coverage. Experimental results show that applying this reordering scheme reduced the number of test patterns by approximately 50% on average. Our results also indicate that relaxing the coverage constraint can drastically reduce test pattern set sizes to a level comparable to traditional 100%-coverage stuck-at pattern sets, while still targeting the majority of bridge faults and keeping the defect level to no more than 10 defective parts per million (DPPM) at a 99% manufacturing yield.
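
    The reordering step lends itself to a simple greedy formulation. The sketch below is a minimal illustration, assuming the per-pattern fault-detection sets and critical-area weights are already available from fault simulation; the data structures and the coverage target are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of greedy test-pattern reordering by weighted fault coverage.
# 'detects' maps pattern id -> set of detected fault ids (from fault simulation);
# 'weight' maps fault id -> critical-area-based weight. Both are assumptions.
def greedy_reorder(detects, weight, target=0.99):
    total = sum(weight.values())
    covered, order = set(), []
    remaining = dict(detects)
    while remaining and sum(weight[f] for f in covered) < target * total:
        # Pick the pattern that adds the most weighted coverage.
        best = max(remaining, key=lambda p: sum(weight[f] for f in remaining[p] - covered))
        if not (remaining[best] - covered):
            break  # no pattern improves coverage further
        order.append(best)
        covered |= remaining.pop(best)
    return order  # reordered (and possibly truncated) pattern list
```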

  • Small and Low-Cost Dual-Rate Optical Triplexer for OLT Transceivers in 10G/1G Co-existing 10G-EPON Systems

    Atsushi KANDA  Akira OHKI  Takeshi KUROSAKI  Hiroaki SANJOH  Kota ASAKA  Ryoko YOSHIMURA  Toshio ITO  Makoto NAKAMURA  Masafumi NOGAWA  Yusuke OHTOMO  Mikio YONEYAMA  

     
    PAPER

    Vol: E96-C No:7    Page(s): 996-1002

    The 10-gigabit Ethernet passive optical network (10G-EPON) is a promising candidate for the next generation of fiber-to-the-home access systems. In the symmetric 10G-EPON system, the gigabit Ethernet passive optical network (GE-PON) and 10G-EPON will have to co-exist on the same optical network. For this purpose, an optical triplexer (10G transmitter, 1G transmitter, and 10G/1G receiver) for optical line terminal (OLT) transceivers in 10G/1G co-existing EPON systems has been developed. Reducing the size and cost of the optical triplexer has been one of the largest obstacles to deploying 10G-EPON systems in practice. In this paper, we describe a novel small and low-cost dual-rate optical triplexer for 10G-EPON applications. By shortening the optical path length with a light collection system built around a low-magnification, long-focus coupling lens, we miniaturized the optical triplexer for use in 10G-EPON OLT 10-gigabit small form factor pluggable (XFP) transceivers and reduced the number of lenses. A low-cost sub-assembly design also contributes to cost reduction. The triplexer's performance complies with the IEEE 802.3av specifications.

  • Low-Cost IP Core Test Using Tri-Template-Based Codes

    Gang ZENG  Hideo ITO  

     
    PAPER-Dependable Computing

    Vol: E90-D No:1    Page(s): 288-295

    A method based on tri-template-based codes (TTBC) is proposed to reduce the test cost of intellectual property (IP) cores. To reduce test data volume (TDV), the approach uses three templates, i.e., all 0s, all 1s, and the previously applied test data, and generates the next test data by flipping the bits that disagree with the chosen template. The approach employs a small number of test channels, I, to supply a large number of internal scan chains, 2^I-3, so it achieves a significant reduction in test application time (TAT). Furthermore, as a non-intrusive solution that is independent of automatic test pattern generation (ATPG), the approach is well suited to IP core testing because it requires neither redesign of the core under test (CUT) nor any additional ATPG runs for the encoding procedure. In addition, the decoder has low hardware overhead, and its design is independent of both the CUT and the given test set. Theoretical analysis and experimental results for ISCAS 89 benchmark circuits demonstrate the efficiency of the proposed approach.
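
    As a rough illustration of the tri-template idea, the sketch below encodes each test vector as whichever template (all-0, all-1, or the previously applied vector) needs the fewest bit flips, plus the positions of those flips. The bit-string representation, cost metric, and output format are assumptions for illustration, not the paper's encoding.

```python
# Sketch: for each vector, choose the template requiring the fewest flips and
# record the flipped bit positions. Representation details are assumptions.
def ttbc_encode(vectors):
    width = len(vectors[0])
    prev = '0' * width
    encoded = []
    for v in vectors:
        templates = {'all0': '0' * width, 'all1': '1' * width, 'prev': prev}
        name, tmpl = min(templates.items(),
                         key=lambda kv: sum(a != b for a, b in zip(v, kv[1])))
        flips = [i for i, (a, b) in enumerate(zip(v, tmpl)) if a != b]
        encoded.append((name, flips))
        prev = v
    return encoded
```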

  • Concurrent Core Testing for SOC Using Merged Test Set and Scan Tree

    Gang ZENG  Hideo ITO  

     
    PAPER-Dependable Computing

    Vol: E89-D No:3    Page(s): 1157-1164

    A novel concurrent core test approach is proposed to reduce the test cost of SOCs. Before testing, the test sets of the cores under test (CUTs) are merged by the proposed merging algorithm to obtain a minimal merged test set. During testing, the proposed scan tree architecture supports concurrent core testing using the merged test set. The approach achieves concurrent core testing with a single scan input and low hardware overhead. Moreover, it requires no additional test generation and can be combined with general compression/decompression techniques to further reduce test cost. Experimental results for the ISCAS 89 benchmarks demonstrate the efficiency of the proposed approach.
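
    The abstract does not spell out the merging criterion, so the sketch below only illustrates one common way to merge test cubes: two cubes are merged when they are compatible through don't-care ('x') bits. The paper's actual merging algorithm and data layout may differ.

```python
# Illustrative merging of don't-care-compatible test cubes (assumed criterion).
def compatible(a, b):
    return all(x == y or 'x' in (x, y) for x, y in zip(a, b))

def merge(a, b):
    return ''.join(y if x == 'x' else x for x, y in zip(a, b))

def merge_test_sets(cubes):
    merged = []
    for cube in cubes:
        for i, m in enumerate(merged):
            if compatible(m, cube):
                merged[i] = merge(m, cube)   # fold the cube into an existing one
                break
        else:
            merged.append(cube)              # no compatible cube found
    return merged
```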

  • X-Tolerant Test Data Compression for SOC with Enhanced Diagnosis Capability

    Gang ZENG  Hideo ITO  

     
    PAPER-Dependable Computing

    Vol: E88-D No:7    Page(s): 1662-1670

    In this paper, a complete X-tolerant test data compression solution is proposed for system-on-a-chip (SOC) testing. The solution achieves low-cost testing by employing selective Huffman vertical coding (SHVC) for test stimulus compression and a MISR-based time compactor for test response compaction. Moreover, the solution is non-intrusive: it can tolerate any number of unknown states (X states) in the test responses, so it does not require modifying the core logic to eliminate or block the sources of unknown states. Furthermore, the solution offers enhanced diagnosis capability over a conventional MISR; the enhancement reuses the existing masking logic, keeping the hardware overhead minimal, and achieves significant savings in diagnostic time. Experimental results for the ISCAS 89 benchmarks, together with an evaluation of the hardware implementation, demonstrate the efficiency of the proposed test solution.
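
    A MISR-based time compactor folds a stream of response slices into a short signature, and X-tolerance means unknown bits must not corrupt that signature. The software model below simply masks 'x' bits to 0 before folding them in. Register width, feedback taps, and the masking policy are illustrative assumptions, not the paper's design.

```python
# Toy software model of a MISR with X-masking (all parameters are assumptions).
def misr_compact(responses, width=16, taps=(0, 2, 3, 5)):
    sig = 0
    for resp in responses:                        # one response slice per cycle
        masked = ['0' if b == 'x' else b for b in resp]
        fb = 0
        for t in taps:                            # LFSR-style feedback
            fb ^= (sig >> t) & 1
        sig = ((sig << 1) | fb) & ((1 << width) - 1)
        word = int(''.join(masked).ljust(width, '0')[:width], 2)
        sig ^= word                               # fold the slice into the signature
    return sig
```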

  • Hybrid Pattern BIST for Low-Cost Core Testing Using Embedded FPGA Core

    Gang ZENG  Hideo ITO  

     
    PAPER-Dependable Computing

    Vol: E88-D No:5    Page(s): 984-992

    In a Reconfigurable System-On-a-Chip (RSOC), an FPGA core is embedded to improve the design flexibility of the SOC. In this paper, we demonstrate that the embedded FPGA core can also be used to implement the proposed hybrid pattern Built-In Self-Test (BIST) in order to reduce the test cost of the SOC. The hybrid pattern BIST, which combines a Linear Feedback Shift Register (LFSR) with the proposed on-chip Deterministic Test Pattern Generator (DTPG), achieves not only complete Fault Coverage (FC) but also a minimal test sequence by applying only a selective number of pseudorandom patterns. Furthermore, the hybrid pattern BIST is designed under the resource constraints of the target FPGA core, so it can be implemented on an FPGA core of any size and take full advantage of the available FPGA resources to reduce test cost. Moreover, the reconfigurable core-based approach has minimal hardware overhead, since the FPGA core can be reconfigured as normal mission logic after testing, eliminating the hardware overhead of the BIST logic. Experimental results for the ISCAS 89 benchmarks and a platform FPGA chip demonstrate the efficiency of the proposed approach.
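
    The hybrid-pattern idea is a selective number of LFSR pseudorandom patterns followed by deterministic top-up patterns for the faults the LFSR leaves undetected. The sketch below is a software illustration only; the polynomial, seed, and pattern width are assumptions, and the on-chip DTPG itself is not modeled.

```python
# Software sketch of hybrid pattern generation (parameter choices are assumptions).
def lfsr_patterns(seed, taps, width, count):
    state, out = seed, []
    for _ in range(count):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1                # XOR of the tap bits
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def hybrid_patterns(seed, taps, width, n_random, deterministic):
    # 'deterministic' holds ATPG patterns for the remaining hard-to-detect faults.
    return lfsr_patterns(seed, taps, width, n_random) + list(deterministic)
```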

  • Cost Reduction for Highly Mobile Users with Commonly Visited Sites

    Takaaki ARAKAWA  Ken'ichi KAWANISHI  Yoshikuni ONOZATO  

     
    PAPER-Location Management

    Vol: E87-A No:7    Page(s): 1700-1711

    In this paper, we consider a location management scheme for Personal Communication Services (PCS) networks that uses the Limited Pointer forwarding from Commonly visited sites (LPC) strategy. A Commonly Visited Site (CVS) is defined as a site in which a mobile user is found with high probability. A key feature of the strategy is that it skips updating the mobile user's location information as long as the user moves within its CVSs, which is expected to significantly reduce the location update cost. We evaluate the location management cost of the LPC scheme using a Continuous-Time Markov Chain (CTMC) model and show that the LPC scheme can reduce the location management cost for a highly mobile user who is found in its CVSs with high probability.
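
    Purely as a toy illustration of the update-skipping rule, the sketch below skips a location update when the user moves into one of its CVSs and otherwise sets a forwarding pointer until a limit is reached, after which a full registration is performed. The data structures and the pointer limit are assumptions; the paper's CTMC cost model is not reproduced here.

```python
# Toy decision rule for the LPC strategy (structures and limit are assumed).
def on_move(new_site, cvs_set, pointer_chain, max_pointers=3):
    if new_site in cvs_set:
        return 'skip_update', pointer_chain              # user is in a CVS
    if len(pointer_chain) < max_pointers:
        return 'forward_pointer', pointer_chain + [new_site]
    return 'full_registration', []                       # reset the pointer chain
```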

  • Accomplishment of At-Speed BISR for Embedded DRAMs

    Yoshihiro NAGURA  Yoshinori FUJIWARA  Katsuya FURUE  Ryuji OHMURA  Tatsunori KOMOIKE  Takenori OKITAKA  Tetsushi TANIZAKI  Katsumi DOSAKA  Kazutami ARIMOTO  Yukiyoshi KODA  Tetsuo TADA  

     
    PAPER-BIST

    Vol: E85-D No:10    Page(s): 1498-1505

    The increasing test time of embedded DRAMs (e-DRAMs) is one of the key issues in system-on-chip (SOC) device testing. This paper proposes placing the repair analysis function on chip as Built-In Self-Repair (BISR). The BISR runs at 166 MHz, the operating speed of the e-DRAM, using low-cost automatic test equipment (ATE). The area of the BISR is 1.7 mm2. The use of an error storage table keeps the area penalty of the repair analysis function small. With BISR, the e-DRAM functional test time at wafer-level testing was about 20% shorter than with the conventional method. Moreover, representative samples were fabricated to confirm the repair analysis capability; the results show that all of the samples were successfully repaired using the repair information generated by the BISR.

  • Comparison of Logic Operators for Use in Multiple-Valued Sum-of-Products Expressions

    Takahiro HOZUMI  Osamu KAKUSHO  Yutaka HATA  

     
    PAPER-Logic Design

    Vol: E82-D No:5    Page(s): 933-939

    This paper identifies the best operators for multiple-valued sum-of-products expressions. We first describe the conditions that functions must satisfy to serve as product and sum operations. We then examine all two-variable functions, select those that meet the conditions, and evaluate the number of product terms needed in the minimum sum-of-products expressions for each combination of the selected product and sum functions. As a result, we obtain three product functions and nine sum functions for three-valued logic. We show that the three product functions can each express the same set of functions and that the MODSUM function is the most suitable for reducing the number of product terms. Moreover, we show that similar results hold for four-valued logic.
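
    To make the role of the sum operator concrete, the sketch below evaluates a three-valued sum-of-products form in which product terms are combined with MODSUM (modulo-3 sum). MIN is used as the product operator purely for illustration; the abstract does not name the product functions, and literal functions are omitted.

```python
# Three-valued SOP evaluation with MODSUM as the sum (MIN product is an assumption).
def modsum(a, b):
    return (a + b) % 3

def evaluate_sop(product_terms, assignment):
    # product_terms: list of variable-name lists; assignment: name -> {0, 1, 2}
    result = 0
    for term in product_terms:
        value = min(assignment[v] for v in term)   # MIN as the product operator
        result = modsum(result, value)
    return result

# Example: (x*y) MODSUM (y*z) with x=2, y=1, z=2 evaluates to 2.
print(evaluate_sop([['x', 'y'], ['y', 'z']], {'x': 2, 'y': 1, 'z': 2}))
```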

  • Convergence Analysis of Processing Cost Reduction Method of NLMS Algorithm with Correlated Gaussian Data

    Kiyoshi TAKAHASHI  Noriyoshi KUROYANAGI  

     
    PAPER-Digital Signal Processing

    Vol: E79-A No:7    Page(s): 1044-1050

    Reducing the complexity of the NLMS algorithm has received attention in the area of adaptive filtering. A processing cost reduction method has been proposed in which a component of the weight vector is updated only when the absolute value of the corresponding sample is greater than or equal to an arbitrary threshold level. A convergence analysis of this method with white Gaussian data has been derived. However, its convergence with correlated Gaussian data, which is important for practical applications, has not been studied. In this paper, we derive the convergence characteristics of the processing cost reduction method with correlated Gaussian data. The analytical results show that the range of the gain constant that ensures convergence is independent of the correlation of the input samples, and that the misadjustment is likewise independent of this correlation. Moreover, the convergence rate is a function of the threshold level and the eigenvalues of the covariance matrix of the input samples, as well as of the gain constant.
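
    The selective-update rule is straightforward to state in code: a weight component is updated only when the magnitude of the corresponding input sample meets the threshold. The NumPy sketch below is a minimal illustration; the step size, threshold, and regularization constant are arbitrary choices, not values from the analysis.

```python
# Minimal partial-update NLMS sketch (parameter values are assumptions).
import numpy as np

def partial_update_nlms(x, d, n_taps, mu=0.5, threshold=0.1, eps=1e-8):
    x, d = np.asarray(x, dtype=float), np.asarray(d, dtype=float)
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]            # current input vector
        e = d[n] - w @ u                             # a-priori error
        mask = np.abs(u) >= threshold                # taps selected for update
        w = w + mu * e * (u * mask) / (u @ u + eps)  # NLMS step on selected taps
    return w
```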

  • Convergence Analysis of Processing Cost Reduction Method of NLMS Algorithm

    Kiyoshi TAKAHASHI  Shinsaku MORI  

     
    PAPER

    Vol: E77-A No:5    Page(s): 825-832

    Reducing the complexity of the NLMS algorithm has received attention in the area of adaptive filtering. A processing cost reduction method has been proposed in which a component of the weight vector is updated only when the absolute value of the corresponding sample is greater than or equal to the average of the absolute values of the input samples. A convergence analysis of this method has been derived from a low-pass filter expression. However, that analysis does not account for the effect of the weight vector components whose adaptations are skipped on the direction of the gradient estimation vector. In this paper, we use an arbitrary value, rather than the average of the absolute values of the input samples, as the threshold level, and we derive the convergence characteristics of the processing cost reduction method with an arbitrary threshold level for zero-mean white Gaussian samples. The analytical results show that the range of the gain constant that ensures convergence and the misadjustment are both independent of the threshold level. Moreover, the convergence rate is a function of the threshold level as well as of the gain constant. When the gain constant is small, the processing cost can be reduced by using a large threshold level without significantly degrading the convergence rate.