IEICE TRANSACTIONS on Fundamentals

  • Impact Factor: 0.40
  • Eigenfactor: 0.003
  • Article Influence: 0.1
  • CiteScore: 1.1

Volume E90-A No.12 (Publication Date: 2007/12/01)

    Special Section on VLSI Design and CAD Algorithms
  • FOREWORD

    Yusuke MATSUNAGA  

     
    FOREWORD

      Page(s):
    2649-2650
  • Chip-Level Substrate Coupling Analysis with Reference Structures for Verification

    Daisuke KOSAKA  Makoto NAGATA  Yoshitaka MURASAKA  Atsushi IWATA  

     
    PAPER-Physical Design

      Page(s):
    2651-2660

    Chip-level substrate coupling analysis uses F-matrix computation with slice-and-stack execution to include a highly concentrated substrate resistivity gradient. The technique, previously applied to the evaluation of device-level isolation structures against substrate coupling, is here developed into chip-level substrate noise analysis. A time-series divided parasitic capacitance (TSDPC) model is equivalent to a transition controllable noise source (TCNS) circuit that captures noise generation in a CMOS digital circuit. A reference structure incorporating TCNS circuits and an array of on-chip high-precision substrate noise monitors provides a basis for verifying chip-level substrate coupling analysis in a given technology. Test chips fabricated in two different wafer processes, using 0.30-µm and 0.18-µm CMOS technologies, demonstrate the general applicability of the proposed analysis techniques. Substrate noise simulation achieves no more than 3 dB discrepancy in peak amplitude compared to measurements with 100-ps/100-µV resolution, enabling precise evaluation both of placing sensitive devices at a distance from noise sources and of applying guard ring/band structures.

  • Timing Analysis Considering Spatial Power/Ground Level Variation

    Masanori HASHIMOTO  Junji YAMAGUCHI  Hidetoshi ONODERA  

     
    PAPER-Physical Design

      Page(s):
    2661-2668

    Spatial power/ground level variation causes a power/ground level mismatch between driver and receiver, and this mismatch affects gate propagation delay. This paper proposes a timing analysis method based on a concept called "PG level equalization," which is compatible with conventional STA frameworks. We equalize the power/ground levels of driver and receiver, and the variation in charging/discharging current caused by the equalization is compensated for by replacing the output load. We present an implementation of the proposed concept and demonstrate that the proposed method works well for multiple-input gates and RC load models.

  • Closed-Form Expressions for Crosstalk Noise and Worst-Case Delay on Capacitively Coupled Distributed RC Lines

    Hiroshi KAWAGUCHI  Danardono Dwi ANTONO  Takayasu SAKURAI  

     
    PAPER-Physical Design

      Page(s):
    2669-2681

    Closed-form expressions for the crosstalk noise amplitude and worst-case delay in capacitively coupled two-line and three-line systems are derived, assuming bus lines and other signal lines in a VLSI. Two modes are studied: one in which adjacent lines are driven from the same direction, and one in which they are driven from opposite directions. In addition, the junction capacitance of a driver MOSFET is considered. The closed-form expressions are useful for circuit designers in an early stage of VLSI design to gain insight into interconnect problems. The expressions are extensively compared with and fitted to SPICE simulations. The relative and absolute errors in the crosstalk noise amplitude are within 63.8% and 0.098 E (where E is the supply voltage), respectively. The relative error in the worst-case delay is less than 8.1%.

  • Manufacturability-Aware Design of Standard Cells

    Hirokazu MUTA  Hidetoshi ONODERA  

     
    PAPER-Physical Design

      Page(s):
    2682-2690

    We focus our attention on the layout-dependent Across Chip Linewidth Variability (ACLV) of gate-forming poly-silicon patterns as a measure of manufacturability, since ACLV is a major contributor to systematic gate-length variation. First, we study the ACLV of standard cell layouts by lithography simulation. Then, we introduce regularity into gate-forming poly-silicon patterns and examine how it improves ACLV and how much area overhead it incurs. Based on this investigation, we propose two design guidelines for standard-cell layout that reduce ACLV with reasonable area overhead: on-grid fixed-pitch layout with dummy-poly insertion, and stretched gate-poly extension. Design experiments assuming a 65 nm process technology indicate that a D-FF designed with the first guideline reduces ACLV by 35% with 14% area overhead, while the second guideline reduces ACLV by 75% with 29% area overhead at the best focus condition. Under defocus conditions, both layouts exhibit stable characteristics, whereas the variability of the conventional layout grows rapidly as the level of defocus increases. Circuit-level lithography simulation on benchmark circuits also confirms that the proposed guidelines considerably reduce gate-length variation.

  • Look-Ahead Dynamic Threshold Voltage Control Scheme for Improving Write Margin of SOI-7T-SRAM

    Masaaki IIJIMA  Kayoko SETO  Masahiro NUMA  Akira TADA  Takashi IPPOSHI  

     
    LETTER-Memory Design and Test

      Page(s):
    2691-2694

    Instability of SRAM cells caused by aggressive technology scaling has recently become one of the most significant issues. Although a 7T-SRAM cell with an area-tolerable separated read port improves read margins even at sub-1 V operation, it unfortunately degrades write margins. To assist the write operation, we propose a new memory cell employing a look-ahead body bias that dynamically controls the threshold voltage. Simulation results show improvement in both write margins and access time without increasing the leakage power associated with the body bias.

  • Area Comparison between 6T and 8T SRAM Cells in Dual-Vdd Scheme and DVS Scheme

    Yasuhiro MORITA  Hidehiro FUJIWARA  Hiroki NOGUCHI  Yusuke IGUCHI  Koji NII  Hiroshi KAWAGUCHI  Masahiko YOSHIMOTO  

     
    PAPER-Memory Design and Test

      Page(s):
    2695-2702

    This paper compares the areas of 6T and 8T SRAM cells in a dual-Vdd scheme and a dynamic voltage scaling (DVS) scheme. In the dual-Vdd scheme, we predict that the area of the 6T cell remains smaller than that of the 8T cell over future technology nodes down to 32 nm. In contrast, in the DVS scheme, the 8T cell becomes superior to the 6T cell beyond the 32-nm node in terms of area.

  • An Efficient Diagnosis Scheme for RAMs with Simple Functional Faults

    Jin-Fu LI  Chao-Da HUANG  

     
    PAPER-Memory Design and Test

      Page(s):
    2703-2711

    This paper presents an efficient diagnosis scheme for RAMs. Three March-based algorithms are proposed to diagnose simple functional faults of RAMs. A March-15N algorithm is used for locating and partially diagnosing faults of bit-oriented or word-oriented memories, where N represents the number of addresses. Then a 3N March-like algorithm is used for locating the aggressor words (bits) of coupling faults (CFs) in word-oriented (bit-oriented) memories; it can also distinguish the faults that cannot be identified by the March-15N algorithm. Thus, the proposed diagnosis scheme can achieve full diagnosis and locate aggressors with (15N + 3mN) Read/Write operations for a bit-oriented RAM with m CFs. For word-oriented RAMs, a March-like algorithm is also proposed to locate the aggressor bit in the aggressor word with 4 log2 B Read/Write operations, where B is the word width. Analysis results show that the proposed diagnosis scheme has higher diagnostic resolution and lower time complexity than previous fault location and fault diagnosis approaches. A programmable built-in self-diagnosis (BISD) design is also implemented to perform the proposed diagnosis algorithms. Experimental results show that the area overhead of the BISD is small, only about 2.17% and 0.42% for 16 K×8-bit and 16 K×128-bit SRAMs, respectively.
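
    For readers unfamiliar with March notation, the sketch below runs a classic March C- test on a simulated bit-oriented RAM with one injected stuck-at-0 cell. It only illustrates how March elements (ascending/descending address orders with read/write operations) detect faults; it is not the March-15N or 3N algorithms proposed in the paper, and the RAM model and fault are made up.

```python
# Illustrative March C- run on a simulated bit-oriented RAM (hypothetical example,
# not the March-15N / 3N algorithms of the paper).

N = 16  # number of addresses

class FaultyRAM:
    """A 1-bit-per-address RAM with one injected stuck-at-0 cell."""
    def __init__(self, size, stuck_at_zero_addr):
        self.cells = [0] * size
        self.sa0 = stuck_at_zero_addr

    def write(self, addr, value):
        self.cells[addr] = 0 if addr == self.sa0 else value

    def read(self, addr):
        return self.cells[addr]

def march_element(ram, addrs, ops, log):
    """Apply one March element: a sequence of (op, value) pairs per address."""
    for a in addrs:
        for op, v in ops:
            if op == "w":
                ram.write(a, v)
            else:  # read and compare against the expected value v
                got = ram.read(a)
                if got != v:
                    log.append((a, v, got))

ram = FaultyRAM(N, stuck_at_zero_addr=5)
mismatches = []
up, down = range(N), range(N - 1, -1, -1)

# March C-: {up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); down(r0)}
march_element(ram, up,   [("w", 0)], mismatches)
march_element(ram, up,   [("r", 0), ("w", 1)], mismatches)
march_element(ram, up,   [("r", 1), ("w", 0)], mismatches)
march_element(ram, down, [("r", 0), ("w", 1)], mismatches)
march_element(ram, down, [("r", 1), ("w", 0)], mismatches)
march_element(ram, down, [("r", 0)], mismatches)

print("mismatches (addr, expected, read):", mismatches)
```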

  • Transistor Sizing of LCD Driver Circuit for Technology Migration

    Masanori HASHIMOTO  Takahito IJICHI  Shingo TAKAHASHI  Shuji TSUKIYAMA  Isao SHIRAKAWA  

     
    LETTER-Circuit Synthesis

      Page(s):
    2712-2717

    Design automation of LCD driver circuits is not well established. The display fineness of an LCD panel depends on a performance metric, the ratio of pixel voltage to video voltage (RPV). However, there are several other important metrics, such as area, so the best circuit cannot be decided uniquely. This paper proposes a design automation technique for an LCD column driver that provides several circuit designs with different performance, so that designers can select an appropriate design among them. The proposed technique is evaluated on actual design data, and experimental results show that the proposed method successfully performs technology migration by transistor sizing. The proposed technique is also experimentally verified in terms of solution quality and computation time.

  • A Fast Probability-Based Algorithm for Leakage Current Reduction Considering Controller Cost

    Tsung-Yi WU  Jr-Luen TZENG  

     
    PAPER-Circuit Synthesis

      Page(s):
    2718-2726

    Because the leakage current of a digital circuit depends on the states of the circuit's logic gates, assigning a minimum leakage vector (MLV) to the primary inputs and the flip-flops' outputs of a circuit operating in sleep mode is a popular technique for leakage current reduction. In this paper, we propose a novel probability-based algorithm and technique that can rapidly find an MLV. Unlike most traditional techniques, which ignore the leakage current overhead of the newly added vector controller, our technique takes this overhead into account. Ignoring this overhead during solution space exploration can have the side effect of misrecognizing a non-optimal solution as an optimal one. Experimental results show that our heuristic algorithm can reduce the leakage current by up to 59.5% and can find the optimal solutions for most of the small MCNC benchmark circuits. Moreover, the CPU time required by our probability-based program is significantly less than that of a random search program.
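
    As a toy illustration of the MLV idea itself (gate leakage depends on input state, so one searches for the input vector minimizing total leakage), the sketch below exhaustively evaluates a three-input example netlist with made-up leakage tables. It is not the probability-based algorithm of the paper and does not model the vector-controller overhead.

```python
# Toy illustration of the minimum-leakage-vector (MLV) idea: leakage of a gate
# depends on its input state, so we search primary-input vectors for the one
# that minimizes total leakage.  Hypothetical leakage numbers (arbitrary units);
# this is NOT the probability-based algorithm of the paper, and the leakage
# overhead of the vector controller itself is not modeled.
from itertools import product

# Hypothetical state-dependent leakage tables (input tuple -> leakage).
LEAK_NAND2 = {(0, 0): 1.0, (0, 1): 2.3, (1, 0): 2.1, (1, 1): 3.5}
LEAK_INV   = {(0,): 1.8, (1,): 1.2}

def nand2(a, b): return 1 - (a & b)

def circuit_leakage(vec):
    """Tiny example netlist: y1 = NAND(a, b), y2 = NAND(b, c), z = INV(y1)."""
    a, b, c = vec
    y1 = nand2(a, b)
    return LEAK_NAND2[(a, b)] + LEAK_NAND2[(b, c)] + LEAK_INV[(y1,)]

best_vec = min(product((0, 1), repeat=3), key=circuit_leakage)
print("minimum leakage vector:", best_vec,
      "leakage =", circuit_leakage(best_vec))
```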

  • Opposite-Phase Clock Tree for Peak Current Reduction

    Yow-Tyng NIEH  Shih-Hsu HUANG  Sheng-Yu HSU  

     
    PAPER-Circuit Synthesis

      Page(s):
    2727-2735

    Although much research effort has been devoted to minimizing the total power consumption caused by the clock tree, no attention has been paid to minimizing the peak current it causes. In this paper, we propose an opposite-phase clock scheme to reduce the peak current incurred by the clock tree. Our basic idea is to balance the charging and discharging activities. According to the output operation, the clock buffers that transit simultaneously are divided into two groups: half of the clock buffers transit in phase with the clock source, while the other half transit in the opposite phase. As a consequence, the opposite-phase clock scheme significantly reduces the peak current caused by the clock tree. Experimental data show that our approach can be applied at different design stages in the existing design flow.

  • Low Area Pipelined Circuits by the Replacement of Registers with Delay Elements

    Bakhtiar Affendi ROSDI  Atsushi TAKAHASHI  

     
    PAPER-Circuit Synthesis

      Page(s):
    2736-2742

    A new algorithm is proposed to reduce the area of a pipelined circuit using a combination of multi-clock-cycle paths, clock scheduling, and delay balancing. The algorithm analyzes the circuit and replaces intermediate registers with delay elements under the condition that the circuit works correctly, with smaller area, over a given target clock-period range. Experiments with pipelined multipliers verify that the proposed algorithm can reduce the area of a pipelined circuit without degrading performance.

  • A Relocation Method for Circuit Modifications

    Kunihiko YANAGIBASHI  Yasuhiro TAKASHIMA  Yuichi NAKAMURA  

     
    PAPER-Circuit Synthesis

      Page(s):
    2743-2751

    In this paper, we propose a novel migration method in which the resultant placement retains, as much as possible, the structure of the original placement, called the model placement. For this purpose, we minimize the sum of the difference in area between the model placement and the relocated one and the total displacement between them. Moreover, to achieve a short runtime, we limit the solution space and change the packing origin during the optimization process. We construct the system on the Sequence-Pair representation. Experimental results show that our approach preserves the chip area and the overall circuit structure with 98% less runtime than naive simulated annealing.

  • Design Method for Numerical Function Generators Using Recursive Segmentation and EVBDDs

    Shinobu NAGAYAMA  Tsutomu SASAO  Jon T. BUTLER  

     
    PAPER-Logic Synthesis and Verification

      Page(s):
    2752-2761

    Numerical function generators (NFGs) realize arithmetic functions, such as e^x and sin(πx), in hardware. They are used in applications where high speed is essential, such as digital signal or graphics processing. We introduce the edge-valued binary decision diagram (EVBDD) as a means of reducing the delay and memory requirements of NFGs. We also introduce a recursive segmentation algorithm, which divides the domain of the function to be realized into segments on which the given function is realized as a polynomial. This design reduces the size of the multiplier needed and thus reduces delay. It is also shown that an adder can be replaced by a set of 2-input AND gates, further reducing delay. We compare our results to NFGs designed with multi-terminal BDDs (MTBDDs), and show that EVBDDs yield a design that has, on average, only 39% of the memory and 58% of the delay of NFGs designed using MTBDDs.
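
    As background on the segmentation step, the sketch below shows one generic form of recursive segmentation: the domain is split until a degree-1 polynomial approximates the function within a tolerance on every segment. The function, tolerance, and fitting details are arbitrary choices, and the EVBDD-based architecture of the paper is not modeled.

```python
# Minimal sketch of recursive segmentation for a numerical function generator:
# split the domain until a degree-1 polynomial approximates f within a tolerance
# on each segment.  Generic illustration only; function and tolerance are
# arbitrary, and the paper's EVBDD-based memory architecture is not modeled.
import math

def fit_linear(f, a, b, samples=33):
    """Least-squares line through sampled points of f on [a, b]."""
    xs = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    ys = [f(x) for x in xs]
    n, sx, sy = len(xs), sum(xs), sum(ys)
    sxx, sxy = sum(x * x for x in xs), sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    err = max(abs(y - (slope * x + intercept)) for x, y in zip(xs, ys))
    return slope, intercept, err

def segment(f, a, b, tol, out):
    slope, intercept, err = fit_linear(f, a, b)
    if err <= tol or (b - a) < 1e-6:
        out.append((a, b, slope, intercept))
    else:
        mid = (a + b) / 2.0          # recursive split
        segment(f, a, mid, tol, out)
        segment(f, mid, b, tol, out)
    return out

segments = segment(lambda x: math.sin(math.pi * x), 0.0, 1.0, tol=1e-3, out=[])
print(len(segments), "segments, e.g.", segments[0])
```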

  • BDD Representation for Incompletely Specified Multiple-Output Logic Functions and Its Applications to the Design of LUT Cascades

    Munehiro MATSUURA  Tsutomu SASAO  

     
    PAPER-Logic Synthesis and Verification

      Page(s):
    2762-2769

    A multiple-output function can be represented by a binary decision diagram for characteristic function (BDD_for_CF). This paper presents a method to represent incompletely specified multiple-output functions using BDD_for_CFs, together with an algorithm to reduce the widths of BDD_for_CFs. This method is useful for the decomposition of incompletely specified multiple-output functions. Experimental results for radix converters, adders, a multiplier, and lists of English words show that the method is useful for the synthesis of LUT cascades. An implementation of an English word list using LUT cascades and an auxiliary memory is also shown.

  • Timing-Constrained Area Minimization Algorithm for Parallel Prefix Adders

    Taeko MATSUNAGA  Yusuke MATSUNAGA  

     
    PAPER-Logic Synthesis and Verification

      Page(s):
    2770-2777

    This paper addresses parallel prefix adder synthesis targeting area minimization under given bitwise timing constraints. The problem is treated as one of synthesizing prefix graphs, which represent the global structures of parallel prefix adders at the technology-independent level, and a two-stage algorithm to minimize the area of prefix graphs is proposed. The first stage is dynamic-programming-based area minimization (DPAM), which focuses on a specific subset of prefix graphs and finds an exact minimum solution for that subset by dynamic programming. The subset is defined by imposing restrictions on the structures of prefix graphs, and by exploiting these restrictions DPAM can find minimum solutions efficiently for practical bit widths. The second stage is area reduction with re-structuring (ARRS), which removes the imposed structural restrictions and restructures the result of DPAM for further area reduction while satisfying the timing constraints. Experimental results show that smaller areas can be achieved than with existing methods, both at the prefix-graph level and at the gate level.
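
    For context, parallel prefix adders compute carries with the associative operator (g, p) ∘ (g', p') = (g ∨ (p ∧ g'), p ∧ p'); the sketch below applies this operator in a simple serial scan to add two bit vectors. It only illustrates the prefix formulation, not the DPAM/ARRS synthesis algorithm of the paper.

```python
# Background sketch: the parallel-prefix formulation of addition.  Carries are
# obtained by a prefix scan of (generate, propagate) pairs under the operator
# o: (g, p) o (g', p') = (g | (p & g'), p & p').  This serial scan only
# illustrates the operator; the paper's contribution (area-minimal prefix-graph
# synthesis under timing constraints) is not reproduced here.

def prefix_op(left, right):
    g1, p1 = left
    g2, p2 = right
    return (g1 | (p1 & g2), p1 & p2)

def prefix_add(a_bits, b_bits, cin=0):
    """Add two little-endian bit lists using (g, p) prefix computation."""
    gp = [(a & b, a ^ b) for a, b in zip(a_bits, b_bits)]
    carries = [cin]
    acc = None
    for pair in gp:
        acc = pair if acc is None else prefix_op(pair, acc)
        # carry into the next position = G[i:0] | (P[i:0] & cin)
        carries.append(acc[0] | (acc[1] & cin))
    sums = [(a ^ b ^ c) for (a, b), c in zip(zip(a_bits, b_bits), carries)]
    return sums, carries[-1]

# 13 + 11 = 24, with 4-bit little-endian inputs
s, cout = prefix_add([1, 0, 1, 1], [1, 1, 0, 1])
value = sum(bit << i for i, bit in enumerate(s)) + (cout << 4)
print(s, cout, value)  # expect value == 24
```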

  • Satisfiability Checking for Logic with Equality and Uninterpreted Functions under Equivalence Constraints

    Hiroaki KOZAWA  Kiyoharu HAMAGUCHI  Toshinobu KASHIWABARA  

     
    PAPER-Logic Synthesis and Verification

      Page(s):
    2778-2789

    For the formal verification of large-scale digital circuits, a method using satisfiability checking of logic with equality and uninterpreted functions has been proposed. This logic, however, does not consider specific properties of functions or predicates at all, e.g., the associativity of addition. To ease this problem, we introduce an "equivalence constraint," a set of formulas representing the properties of functions and predicates, and check the satisfiability of formulas under the constraint. In this paper, we present an algorithm for checking satisfiability under an equivalence constraint, together with experimental results.

  • Scheduling Methods for Asynchronous Circuits with Bundled-Data Implementations Based on the Approximation of Start Times

    Hiroshi SAITO  Naohiro HAMADA  Nattha JINDAPETCH  Tomohiro YONEDA  Chris MYERS  Takashi NANYA  

     
    PAPER-System Level Design

      Page(s):
    2790-2799

    This paper proposes new scheduling methods for asynchronous circuits with bundled-data implementations. Since operations in asynchronous circuits start after the completion of the previous operation, the proposed method approximates the set of start times for each operation using the delays of the resources. Next, it decides control steps from the approximated sets of start times, which are then used in scheduling algorithms. This paper extends two scheduling algorithms used for synchronous circuits so that the approximated sets of start times and the decided control steps are used. Finally, the effectiveness of the proposed methods is shown by comparing scheduling results with those obtained by the original two scheduling algorithms.

  • Generation of Pack Instruction Sequence for Media Processors Using Multi-Valued Decision Diagram

    Hiroaki TANAKA  Yoshinori TAKEUCHI  Keishi SAKANUSHI  Masaharu IMAI  Hiroki TAGAWA  Yutaka OTA  Nobu MATSUMOTO  

     
    PAPER-System Level Design

      Page(s):
    2800-2809

    SIMD instructions are often implemented in modern multimedia-oriented processors. Although SIMD instructions are useful for many digital signal processing applications, most compilers do not exploit them. The difficulty in utilizing SIMD instructions stems from the data parallelism within registers: in assembly code generation, the positions of data in registers must be tracked, and a technique for generating pack instructions, which pack or reorder data in registers, is essential for exploiting SIMD instructions. This paper presents a code generation technique for SIMD instructions with pack instructions. SIMD instructions are generated by finding and grouping the same operations in programs; pack instructions are then generated. In the pack instruction generation, a Multi-valued Decision Diagram (MDD) is introduced to represent and manipulate sets of packed data. Experimental results show that the proposed technique can generate assembly code with SIMD and pack instructions, performing repacking of 8 packed data elements in registers, for a RISC processor with a dual-issue coprocessor that supports SIMD and pack instructions. The proposed method achieved speedup ratios of up to about 8.5 through SIMD instructions and the multiple-issue mechanism of the target processor.

  • Efficient Memory Utilization for High-Speed FPGA-Based Hardware Emulators with SDRAMs

    Kohei HOSOKAWA  Katsunori TANAKA  Yuichi NAKAMURA  

     
    PAPER-System Level Design

      Page(s):
    2810-2817

    FPGA-based hardware emulators are often used for the verification of LSI functions. They generally have dedicated external memories, such as SDRAMs, to compensate for the lack of memory capacity in FPGAs. In such a case, access between the FPGAs and the dedicated external memory may become a major bottleneck for emulation speed, since the dedicated external memory may have to emulate a large number of memory blocks. In this paper, we propose three methods, "Dynamic Clock Control (DCC)," "Memory Mapping Optimization (MMO)," and "Efficient Access Scheduling (EAS)," to avoid this bottleneck. DCC controls the emulation clock dynamically according to the number of memory accesses within one emulation clock cycle, EAS optimizes the ordering of accesses to the dedicated external memory, and MMO optimizes the arrangement of the dedicated external memory addresses to which the respective memories are emulated. With these methods, emulation speed can be made 29.0 times faster, as evaluated in actual LSI emulations.

  • High-Efficiency VLSI Architecture Design for Motion-Estimation in H.264/AVC

    Chun-Lung HSU  Mean-Hom HO  

     
    PAPER-System Level Design

      Page(s):
    2818-2825

    This paper proposes a flexible VLSI architecture for motion estimation in H.264/AVC using a high-efficiency variable block-size decision (VBSD) approach. The proposed VBSD approach performs a full motion search by integrating 4×4 block sizes into 4×8, 8×4, 8×8, 8×16, 16×8, or 16×16 block sizes and then appropriately selects the optimal modes for motion compensation. In other words, the proposed architecture based on the VBSD approach can effectively reduce the encoding time of motion estimation by dealing with different block sizes within a 16×16 search range. The proposed architecture has been successfully realized using TSMC 0.18-µm CMOS technology. Simulation and verification results show that the proposed architecture achieves significant bit-rate reduction with small PSNR degradation. The physical chip design also shows that this work can process up to 704 fps with QCIF (176×144), 176 fps with CIF (352×288), and 44 fps with 4CIF (704×576) video resolutions, with a lower gate count and a higher working frequency.
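
    As background, the sketch below shows a generic full-search, SAD-based block-matching step within a ±16 search range, the baseline operation that variable block-size motion estimation builds on. The VBSD architecture itself is not modeled and the frame data are synthetic.

```python
# Generic full-search block-matching sketch (sum of absolute differences) for a
# 16x16 block within a +/-16 search range.  This only illustrates the baseline
# operation that variable-block-size motion estimation builds on; the VBSD
# architecture of the paper is not modeled.
import numpy as np

def full_search(ref, cur, bx, by, block=16, rng=16):
    """Return the motion vector (dx, dy) minimizing SAD for the block at (bx, by)."""
    h, w = ref.shape
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best = (0, 0, np.inf)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if sad < best[2]:
                best = (dx, dy, sad)
    return best

rng_gen = np.random.default_rng(0)
ref = rng_gen.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))   # current frame = shifted reference
print("best (dx, dy, SAD):", full_search(ref, cur, bx=24, by=24))
```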

  • Regular Section
  • An Approach to Solve Local Minimum Problem in Sound Source and Microphone Localization

    Kazunori KOBAYASHI  Ken'ichi FURUYA  Yoichi HANEDA  Akitoshi KATAOKA  

     
    PAPER-Engineering Acoustics

      Page(s):
    2826-2834

    We previously proposed a method for sound source and microphone localization that estimates the locations of sound sources and microphones from only the time differences of arrival between signals picked up by the microphones, even when all of their locations are unknown. However, some estimation results converge to local minima, because the method estimates locations iteratively and the error function has multiple minima. In this paper, we present a new iterative method to solve this local minimum problem. The method achieves accurate estimation by selecting effective initial locations from many random initial locations. Computer simulation and experimental results demonstrate that the presented method eliminates most local minimum solutions, while its computational complexity remains similar to that of the previous method.
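
    The sketch below illustrates the underlying multi-start idea on an arbitrary multi-modal toy function: run a local search from many random initial points and keep the best result. It is not the TDOA-based localization method or the initial-location selection rule of the paper.

```python
# Generic multi-start illustration of escaping local minima: run a local search
# from many random initial points and keep the best result.  The objective is an
# arbitrary multi-modal toy function, not the TDOA-based error function of the
# localization method in the paper.
import math
import random

def f(x):                       # toy multi-modal objective
    return math.sin(3.0 * x) + 0.1 * (x - 2.0) ** 2

def local_search(x, step=1e-2, iters=2000):
    """Simple finite-difference gradient descent."""
    for _ in range(iters):
        grad = (f(x + 1e-5) - f(x - 1e-5)) / 2e-5
        x -= step * grad
    return x

random.seed(0)
starts = [random.uniform(-5.0, 10.0) for _ in range(30)]
candidates = [local_search(x0) for x0 in starts]
best = min(candidates, key=f)
print("best x = %.4f, f(x) = %.4f" % (best, f(best)))
```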

  • A Distortion-Free Learning Algorithm for Feedforward Multi-Channel Blind Source Separation

    Akihide HORITA  Kenji NAKAYAMA  Akihiro HIRANO  

     
    PAPER-Digital Signal Processing

      Page(s):
    2835-2845

    FeedForward (FF-) Blind Source Separation (BSS) systems have some degrees of freedom in the solution space, and therefore signal distortion is likely to occur. First, a criterion for signal distortion is discussed, and the properties of conventional methods proposed to suppress signal distortion are analyzed. Next, a general condition for complete separation and distortion-free operation is derived for multi-channel FF-BSS systems. This condition is incorporated into learning algorithms as a distortion-free constraint. Computer simulations using speech signals and stationary colored signals are performed for the conventional methods and for the new learning algorithms employing the proposed distortion-free constraint. The proposed method suppresses signal distortion well while maintaining high source separation performance.

  • An Algorithm to Improve the Performance of M-Channel Time-Interleaved A-D Converters

    Koji ASAMI  

     
    PAPER-Analog Signal Processing

      Page(s):
    2846-2852

    One method for achieving high-speed waveform digitizing uses time-interleaved A-D converters (ADCs). In this method, using multiple ADCs enables sampling at a rate higher than the sampling rate of the individual ADCs. Degradation of the dynamic range, however, results from factors such as phase error in the sampling clocks applied to the ADCs and mismatched frequency characteristics among the individual ADCs. This paper describes a method for correcting these mismatches using a digital signal processing (DSP) technique. The method can be applied to any number of interleaved ADCs and does not require any additional hardware; good correction and improved accuracy can be obtained simply by adding a small amount of computational overhead.
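
    The toy simulation below shows how a timing-skew (clock-phase) error in one channel of an M-way time-interleaved ADC distorts the reconstructed waveform; all parameters are arbitrary, and the DSP correction method described in the paper is not implemented.

```python
# Toy simulation of an M-channel time-interleaved ADC with a timing-skew error
# in one channel.  It shows how the skew distorts the reconstructed waveform;
# it does not implement the correction method of the paper, and all parameters
# are arbitrary.
import numpy as np

M = 4                      # number of interleaved ADCs
fs = 1.0e9                 # aggregate sampling rate [Hz]
f_in = 123.0e6             # input tone [Hz]
n = np.arange(4096)
t_ideal = n / fs

# The channel handling samples with n % M == 2 samples 5 ps too late.
skew = np.where(n % M == 2, 5e-12, 0.0)
x_skewed = np.sin(2 * np.pi * f_in * (t_ideal + skew))
x_ideal = np.sin(2 * np.pi * f_in * t_ideal)

err = x_skewed - x_ideal
print("rms interleaving error: %.2e" % np.sqrt(np.mean(err ** 2)))
# A spectrum of x_skewed would show spurs offset by multiples of fs/M from f_in.
```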

  • Function-Level Partitioning of Sequential Programs for Efficient Behavioral Synthesis

    Yuko HARA  Hiroyuki TOMIYAMA  Shinya HONDA  Hiroaki TAKADA  Katsuya ISHII  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    2853-2862

    This paper proposes a behavioral-level partitioning method for efficient behavioral synthesis from a large sequential program consisting of a set of functions. Our method optimally determines which functions are inlined into the main module and which are synthesized into sub-modules, in such a way that the overall datapath is minimized while the complexity of individual modules is kept below a certain level. The partitioning problem is formulated as an integer programming problem. Experimental results show the effectiveness of the proposed method.

  • `Adaptive Link Adjustment' Applied to the Fixed Charge Transportation Problem

    Sang-Moon SOAK  

     
    PAPER-Numerical Analysis and Optimization

      Page(s):
    2863-2876

    The fixed charge transportation problem (FCTP) is a classic challenge in combinatorial optimization; it is based on the well-known transportation problem (TP), is one of the prime examples of an NP-complete variant of the TP, and is of general importance in a wide range of transportation network design problems. Many techniques have been applied to this problem, and the most effective so far (in terms of near-optimal results in reasonable time on large instances) are evolutionary-algorithm-based approaches. In particular, an EA proposed by Eckert and Gottlieb has produced the best performance so far on a set of specific benchmark instances. We introduce a new scheme, which has more general applicability but which we test here on the FCTP. The proposed scheme applies an adaptive mutation process immediately after the evaluation of a phenotype, and thereby adapts automatically to learned information encoded in the chromosome. The underlying encoding approach is to encode an ordering of elements for interpretation by a constructive algorithm (as with the Link and Node Biased encoding for spanning trees, and the Random Keys encoding, which has been applied to both scheduling and graph problems). The main adaptive process, however, rewards links in such a way that genes effectively encode a measure of the number of times their associated link has appeared in selected solutions. Tests compare our approach with Eckert and Gottlieb's results on benchmark FCTP instances, as well as with other approaches.
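
    As a reminder of the Random Keys encoding mentioned above, the sketch below decodes a real-valued chromosome into an ordering of elements for a constructive algorithm; it shows only the decoding step, not the proposed adaptive link-adjustment scheme.

```python
# Minimal sketch of the Random Keys idea mentioned in the abstract: a chromosome
# of real-valued keys is decoded into an ordering of elements, which a
# constructive algorithm then interprets.  This is only the decoding step, not
# the adaptive link-adjustment scheme proposed in the paper.
import random

def decode_random_keys(keys):
    """Return element indices sorted by their key values (the ordering)."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

random.seed(1)
chromosome = [random.random() for _ in range(8)]   # one key per element/link
ordering = decode_random_keys(chromosome)
print("keys    :", [round(k, 2) for k in chromosome])
print("ordering:", ordering)
# A constructive heuristic would now insert links/elements in this order.
```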

  • Kalman-Filter Based Estimation of Electric Load Composition with Non-ideal Transformer Modeling

    Soon LEE  Seung-Mook BAEK  Jung-Wook PARK  Young-Hyun MOON  

     
    PAPER-Numerical Analysis and Optimization

      Page(s):
    2877-2883

    This paper presents a study to estimate the composition of an electric load, i.e., to determine the amount of each load class, from direct measurements of the total electric current waveform obtained from instrument readings. A Kalman filter algorithm is applied to estimate the electric load composition on the consumer side of a distributed power system. An electric load supplied from a different voltage level through a non-ideal delta-wye transformer is also studied, with consideration of the practical environment of a distributed power system.
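
    For reference, the sketch below shows the standard discrete-time Kalman filter predict/update recursion on a hypothetical two-class load whose measurements are linear combinations of the class amounts; the matrices are placeholders, and the paper's load model and non-ideal delta-wye transformer treatment are not reproduced.

```python
# Standard discrete-time Kalman filter recursion that a load-composition
# estimator can build on.  The state/measurement matrices are placeholders;
# the paper's load-class model and non-ideal delta-wye transformer modeling
# are not reproduced.
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict + update step: x is the state estimate, P its covariance."""
    x_pred = A @ x                       # predict
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Two hypothetical load classes; each measurement mixes them differently.
A = np.eye(2)                            # loads assumed slowly varying
H = np.array([[1.0, 1.0],
              [0.2, 0.8]])
Q = 1e-4 * np.eye(2)
R = 1e-2 * np.eye(2)
x, P = np.zeros(2), np.eye(2)

rng = np.random.default_rng(0)
true_x = np.array([3.0, 1.5])
for _ in range(300):
    z = H @ true_x + 0.1 * rng.standard_normal(2)
    x, P = kalman_step(x, P, z, A, H, Q, R)
print("estimated load composition:", np.round(x, 2))  # approaches [3.0, 1.5]
```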

  • A Parallel Algorithm for NMNF Problems with a Large Number of Capacity Constraints

    Shieh-Shing LIN  

     
    PAPER-Numerical Analysis and Optimization

      Page(s):
    2884-2890

    In this paper, we propose a converting-technique-based method to solve nonlinear multi-commodity network flow (NMNF) problems with a large number of capacity constraints, and discuss the associated implementation. We combine this method with a successive quadratic programming (SQP) method and a parallel dual-type (PDt) method possessing decomposition effects. We have tested our method on lattice-type network examples of NMNF problems. The simulation results show that the proposed algorithm is efficient for solving NMNF problems and successfully handles a large number of coupling capacity constraints. Furthermore, the computational efficiency of the proposed algorithm becomes more significant as the number of capacity constraints increases.

  • Discrete Program-Size Dependent Software Reliability Assessment: Modeling, Estimation, and Goodness-of-Fit Comparisons

    Shinji INOUE  Shigeru YAMADA  

     
    PAPER-Reliability, Maintainability and Safety Analysis

      Page(s):
    2891-2902

    In this paper, we propose a discrete program-size-dependent software reliability growth model that flexibly describes the software failure-occurrence phenomenon based on a discrete Weibull distribution, and we conduct model comparisons between our discrete SRGM and existing discrete SRGMs using actual data sets. The program size is one of the important metrics of software complexity. Flexible discrete software reliability growth modeling is known to be difficult because of the mathematical manipulation required under a conventional modeling framework, in which the time-dependent behavior of the cumulative number of detected faults is formulated by a difference equation. Our discrete SRGM is developed under an existing unified modeling framework based on the concept of general order-statistics, and can incorporate the effect of program size into software reliability assessment. Further, we discuss the method of parameter estimation and derive software reliability assessment measures for our discrete SRGM. Finally, we show numerical examples of discrete software reliability analysis based on our discrete SRGM using actual data.

  • The Vanstone-Zuccherato Schemes Revisited

    Naoki KANAYAMA  Shigenori UCHIYAMA  

     
    PAPER-Information Security

      Page(s):
    2903-2907

    In 1995, Vanstone and Zuccherato proposed a novel method of generating RSA moduli having a predetermined set of bits which are the ASCII representation of a user's identification information (i.e., name, email address, etc.). This could lead to savings in bandwidth for data transmission and storage. In this paper, we apply this idea of Vanstone and Zuccherato, reducing the storage requirement of public moduli, to integer-factoring-based public-key schemes whose moduli have the form p^r q. More precisely, we explicitly propose two efficient methods for specifying the high-order bits of the prime factors of such public keys. We also consider the security of the proposed methods.

  • Improved MACs from Differentially-Uniform Permutations

    Kazuhiko MINEMATSU  Toshiyasu MATSUSHIMA  

     
    PAPER-Information Security

      Page(s):
    2908-2915

    This paper presents MACs that combine a block cipher and one of its components, such as a reduced-round version. Our MACs are faster than standard MAC modes such as CBC-MAC, and are provably secure if the block cipher is pseudorandom and its component is a permutation with a small differential probability. Such a MAC scheme was recently proposed by one of the authors; we provide improvements in security and in the trade-off between speed and the amount of preprocessing.

  • Group-Linking Method: A Unified Benchmark for Machine Learning with Recurrent Neural Network

    Tsungnan LIN  C. Lee GILES  

     
    PAPER-Neural Networks and Bioengineering

      Page(s):
    2916-2929

    This paper proposes a method (the Group-Linking Method) that has control over the complexity of the sequential function used to construct Finite Memory Machines (FMMs) with minimal order, that is, machines with the largest number of states for their memory taps. Finding a machine with the maximum number of states is a nontrivial problem because the total number of machines with memory order k is (256)2k-2, a very large number. Based on the analysis of the Group-Linking Method, it is shown that the amount of data necessary to reconstruct an FMM is the set of strings no longer than the depth of the machine plus one, which is significantly less than that required by traditional greedy machine learning algorithms. The Group-Linking Method provides a systematic way of generating unified benchmarks to evaluate the capability of machine learning techniques. One example is testing the learning capability of recurrent neural networks. The problem of encoding finite state machines with recurrent neural networks has been extensively explored; however, the great representational power of those networks does not guarantee that a solution can be learned. Previous learning benchmarks are shown not to be structurally rich enough in terms of solutions in weight space. This set of benchmarks, with great expressive power, can serve as a convenient framework in which to study the learning and computation capabilities of various network models. A fundamental understanding of the capabilities of these networks will allow users to select the most appropriate model for a given application.

  • An Improved Clonal Selection Algorithm and Its Application to Traveling Salesman Problems

    Shangce GAO  Zheng TANG  Hongwei DAI  Jianchen ZHANG  

     
    PAPER-Neural Networks and Bioengineering

      Page(s):
    2930-2938

    The clonal selection algorithm (CS), inspired by the basic features of the adaptive immune response to antigenic stimulus, can exploit and explore the solution space in parallel and effectively. However, antibody initialization and premature convergence are two problems of CS. To overcome these two problems, we propose a chaotic distance-based clonal selection algorithm (CDCS), which introduces a chaotic initialization mechanism and a distance-based somatic hypermutation to improve the performance of CS. The proposed algorithm is verified on numerous benchmark traveling salesman problems. Experimental results show that the improved algorithm provides better performance than other metaheuristics.
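
    The sketch below is a generic clonal selection loop on a toy one-dimensional minimization problem: fitter antibodies receive more clones and smaller mutations. It illustrates plain CS only; the chaotic initialization and distance-based hypermutation of CDCS, and the TSP encoding, are not included.

```python
# Basic clonal-selection loop (CLONALG-style) on a toy 1-D problem: better
# antibodies receive more clones and smaller mutations.  Generic sketch, not
# the chaotic distance-based CDCS of the paper, and not the TSP formulation
# used in its experiments.
import math
import random

def affinity(x):                       # higher is better (we minimize f)
    return -((x - 1.3) ** 2 + 0.3 * math.sin(8 * x))

random.seed(0)
pop = [random.uniform(-4.0, 4.0) for _ in range(20)]

for gen in range(60):
    pop.sort(key=affinity, reverse=True)
    clones = []
    for rank, ab in enumerate(pop[:10]):            # clone the best antibodies
        n_clones = max(1, 10 - rank)                # more clones for higher rank
        scale = 0.02 * (rank + 1)                   # smaller mutation for higher rank
        for _ in range(n_clones):
            clones.append(ab + random.gauss(0.0, scale))
    # Select survivors and inject a few random newcomers for diversity
    pop = sorted(pop + clones, key=affinity, reverse=True)[:18]
    pop += [random.uniform(-4.0, 4.0) for _ in range(2)]

best = max(pop, key=affinity)
print("best solution: %.4f  affinity: %.4f" % (best, affinity(best)))
```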

  • Robust Source Separation with Simple One-Source-Active Detection

    Yijing CHU  Heping DING  Xiaojun QIU  

     
    LETTER-Engineering Acoustics

      Page(s):
    2939-2944

    Assuming there are short time periods in which only one source is active, a new approach for source separation is proposed. An affine projection adaptation algorithm with a non-orthogonal constraint shows excellent noise immunity, a high convergence rate, and good tracking capability to efficiently obtain a solution to the separation filters.

  • A Class of Cocyclic Quasi Jacket Block Matrix

    Moon Ho LEE  Subash Shree POKHREL  Wen Ping MA  

     
    LETTER-Digital Signal Processing

      Page(s):
    2945-2948

    In this letter, we present quasi-Jacket block matrices over GF(2), i.e., binary matrices, all of which belong to a class of cocyclic matrices. These matrices may be useful in digital signal processing, CDMA, and coded modulation. Based on the Circular Permutation Matrix (CPM), a cocyclic quasi-Jacket block low-density matrix, which is useful in coding theory, is also introduced in this letter. Additionally, we show a fast algorithm for the quasi-Jacket block matrix.

  • A Digital Image Watermarking Method Using Interval Arithmetic

    Teruya MINAMOTO  Mitsuaki YOSHIHARA  Satoshi FUJII  

     
    LETTER-Digital Signal Processing

      Page(s):
    2949-2951

    In this letter, we propose a new digital image watermarking method using interval arithmetic. This is a new application of interval arithmetic. Experimental results show that the proposed method gives a watermarked image of better quality and is robust against some attacks.
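
    Since the letter's key tool is interval arithmetic, the sketch below shows minimal interval addition, subtraction, and multiplication, where each result encloses all possible values of its operands; the watermark embedding and extraction procedure itself is not shown.

```python
# Minimal interval-arithmetic sketch (the key tool named in the letter): an
# interval [lo, hi] propagates through arithmetic so the result encloses every
# possible value.  The watermark embedding/extraction procedure is not shown.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)

a, b = Interval(1.0, 2.0), Interval(-0.5, 0.5)
print(a + b, a - b, a * b)   # each result encloses all possible outcomes
```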

  • Fast Parameter Selection Algorithm for Linear Parametric Filters

    Akira TANAKA  Masaaki MIYAKOSHI  

     
    LETTER-Digital Signal Processing

      Page(s):
    2952-2956

    A parametric linear filter for a linear observation model usually requires a parameter selection process so that the filter achieves better filtering performance. In general, criteria for parameter selection require not only the filtered solution but also the filter itself for each candidate value of the parameter, and obtaining the filter usually requires a large amount of computation. Thus, an efficient algorithm for parameter selection is needed. In this paper, we propose a fast parameter selection algorithm for linear parametric filters that utilizes a joint diagonalization of two non-negative definite Hermitian matrices.
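
    The sketch below illustrates the general principle of fast parameter selection (factorize the observation model once, then evaluate every candidate parameter cheaply) using a single SVD to sweep a Tikhonov regularization parameter; the letter's joint diagonalization of two Hermitian matrices and its actual selection criterion are not reproduced, and the selection rule shown is only an example.

```python
# Illustration of the principle behind fast parameter selection: factorize the
# observation model once, then evaluate every candidate parameter cheaply.
# A single SVD of A is reused to sweep the Tikhonov parameter of the filter
# x(lam) = (A^T A + lam I)^(-1) A^T y.  This is a generic stand-in, not the
# joint-diagonalization-based algorithm or criterion of the letter.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 30))
x_true = rng.standard_normal(30)
y = A @ x_true + 0.05 * rng.standard_normal(60)

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # computed once
Uty = U.T @ y

def tikhonov_solution(lam):
    # Filter factors s / (s^2 + lam) applied in the SVD basis -- cheap per lam
    return Vt.T @ ((s / (s ** 2 + lam)) * Uty)

candidates = np.logspace(-6, 2, 50)
# Example selection rule only (residual closest to the known noise level).
target = 0.05 * np.sqrt(60)
best_lam = min(candidates,
               key=lambda lam: abs(np.linalg.norm(A @ tikhonov_solution(lam) - y) - target))
print("selected lambda: %.3g" % best_lam)
```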

  • Covariance Control for Bilinear Stochastic Systems via Sliding Mode Control Concept

    Koan-Yuh CHANG  Tsung-Lin CHENG  

     
    LETTER-Systems and Control

      Page(s):
    2957-2961

    Based on the concept of sliding mode control, we study the problem of steady state covariance assignment for bilinear stochastic systems. We find that the invariance property of sliding mode control ensures nullity of the matched bilinear term in the system on the sliding mode. By suitably using Ito calculus, the controller u(t) can be designed to force the feedback gain matrix G to achieve the goal of steady state covariance assignment. We also compare our method with other approaches via simulations.

  • Security Analysis of Zhu-Bao's Verifiably Committed Signature

    Dae Hyun YUM  Pil Joong LEE  

     
    LETTER-Information Security

      Page(s):
    2962-2964

    A fair exchange scheme is a protocol by which two parties, Alice and Bob, swap items or services without allowing either party to gain an advantage by quitting prematurely or otherwise misbehaving. Verifiably committed signatures are a generalized and unified model for non-interactive optimistic fair exchange schemes. The state-of-the-art verifiably committed signature that enjoys the off-line, setup-free, and stand-alone properties is due to Zhu and Bao [1]. In this article, we show that Zhu and Bao's verifiably committed signature is insecure in the multi-user setting and then consider possible countermeasures.

  • A Note on the ε-Overflow Probability of Lossless Codes

    Ryo NOMURA  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    LETTER-Information Theory

      Page(s):
    2965-2970

    In this letter, we generalize the achievability of variable-length coding from two viewpoints: the definition of the overflow probability, and the definition of achievability. We define the overflow probability as the probability that the codeword length, not the length per symbol, is larger than ηn, and we introduce the ε-achievability of variable-length codes, which implies the existence of a code for the source such that the overflow probability is smaller than or equal to ε. We then show that the ε-achievability of variable-length codes is essentially equivalent to the ε-achievability of fixed-length codes for general sources. Moreover, using these results, we show the condition for ε-achievability for some restricted sources given ε.

  • An Effective SLM-PRSC Hybrid Scheme for OFDM PAPR Reduction

    Seungwoo HAN  Suckchel YANG  Yoan SHIN  

     
    LETTER-Communication Theory and Signals

      Page(s):
    2971-2974

    In order to improve OFDM (Orthogonal Frequency Division Multiplexing) PAPR (Peak-to-Average Power Ratio) reduction performance of the conventional SLM (SeLective Mapping), we propose an effective SLM-PRSC (PAPR Reduction Sub-Carrier) hybrid scheme. In the proposed scheme, after performing the SLM for the frequency domain OFDM symbol excluding pre-determined PRSC positions, the SLM-PRSC hybrid sequence with the lowest PAPR, which is generated by adding the time domain PRSC sequence to the results of the SLM, is selected as the transmitted OFDM signal. Since the identical PRSC sequences generated a priori are repeatedly used for every OFDM symbol, excessive IFFT (Inverse Fast Fourier Transform) calculation is avoided. Simulation results reveal that the proposed scheme significantly improves the PAPR reduction performance of the conventional SLM, while avoiding excessive increase of IFFT and PAPR calculation.
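
    For context, the sketch below implements conventional SLM, the baseline the letter improves on: the frequency-domain OFDM symbol is multiplied by several candidate phase sequences, each candidate is transformed by an IFFT, and the one with the lowest PAPR is kept. The proposed SLM-PRSC hybrid step (adding time-domain PRSC sequences) is not implemented, and all parameters are arbitrary.

```python
# Sketch of conventional SLM (the baseline the letter improves on): multiply the
# frequency-domain OFDM symbol by U candidate phase sequences, IFFT each, and
# keep the candidate with the lowest PAPR.  The proposed SLM-PRSC hybrid
# (time-domain PRSC sequences added after SLM) is not implemented here, and all
# parameters are arbitrary.
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N, U = 256, 8                                   # subcarriers, SLM candidates
qpsk = (rng.integers(0, 2, N) * 2 - 1
        + 1j * (rng.integers(0, 2, N) * 2 - 1)) / np.sqrt(2)

best_papr, best_signal = np.inf, None
for u in range(U):
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # candidate phase sequence
    x = np.fft.ifft(qpsk * phases)                       # time-domain OFDM signal
    if papr_db(x) < best_papr:
        best_papr, best_signal = papr_db(x), x

print("original PAPR: %.2f dB, SLM PAPR: %.2f dB"
      % (papr_db(np.fft.ifft(qpsk)), best_papr))
```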

  • Performance Bound for Finite-Length LDPC Coded Modulation

    Huy G. VU  Ha H. NGUYEN  David E. DODDS  

     
    LETTER-Communication Theory and Signals

      Page(s):
    2975-2978

    Coded modulation systems based on low-density parity-check (LDPC) codes of finite length are considered. Union bounds on the bit error probabilities of maximum likelihood (ML) decoding are presented for both additive white Gaussian noise (AWGN) and flat Rayleigh fading channels. The tightness of the derived bounds is verified by simulating ML decoding of a very short LDPC code. For medium-length codes, the performance of sum-product decoding can asymptotically approach the bounds.

  • A Hybrid Image Coder Based on SPIHT Algorithm with Embedded Block Coding

    Tze-Yun SUNG  Hsi-Chin HSIN  

     
    LETTER-Image

      Page(s):
    2979-2984

    Embedded zero-tree coding in wavelet domain has drawn a lot of attention for image compression applications. Among noteworthy zero-tree algorithms is the set partitioning in hierarchical trees (SPIHT) algorithm. For images with textures, high frequency wavelet coefficients are likely to become significant after a few scan passes of SPIHT, and therefore the coding results are often insufficient. It is desirable that the low frequency and high frequency components of an image are coded using different strategies. In this paper, we propose a hybrid algorithm using the SPIHT and EBC (embedded block coding) to code low frequency and high frequency wavelet coefficients, respectively; the intermediate coding results of low frequency coefficients are used to facilitate the coding operation of high frequency coefficients. Experimental results show that the coding performance can be significantly improved by the hybrid SPIHT-EBC algorithm.

  • State Machines as Inductive Types

    Kazuhiro OGATA  Kokichi FUTATSUGI  

     
    LETTER-Concurrent Systems

      Page(s):
    2985-2988

    We describe a way to write state machines inductively. The proposed method makes it possible to use the standard techniques for proving theorems on inductive types to verify that state machines satisfy invariant properties. A mutual exclusion protocol using a queue is used to exemplify the proposed method.