IEICE TRANSACTIONS on Fundamentals

  • Impact Factor

    0.48

  • Eigenfactor

    0.003

  • Article Influence

    0.1

  • Cite Score

    1.1

Volume E92-A No.7  (Publication Date: 2009/07/01)

    Special Section on Recent Advances in Technologies for Assessing System Reliability
  • FOREWORD

    Shigeru YANAGI  

     
    FOREWORD

      Page(s):
    1557-1557
  • The Consistency of the Pandemic Simulations between the SEIR Model and the MAS Model

    Yuki TOYOSAKA  Hideo HIROSE  

     
    PAPER

      Page(s):
    1558-1562

    There are two main methods for pandemic simulation: the SEIR model and the MAS model. The SEIR model can run simulations quickly for large homogeneous populations using simple ordinary differential equations; however, it cannot accommodate many detailed conditions. The MAS model, a multi-agent simulation, can handle detailed simulations under many kinds of initial and boundary conditions using simple social network models; however, its computing cost grows exponentially as the population size becomes larger, so large-scale simulations can hardly be realized unless supercomputers are available. By combining these two methods, we may perform large-scale pandemic simulations at lower cost: the MAS model is used in the early stage of a pandemic simulation to determine appropriate parameters for the SEIR model, and the SEIR model is then run with the obtained parameters. To investigate the validity of this combined method, we first compare the simulation results of the SEIR model and the MAS model. The results of the MAS model and of the SEIR model that uses the parameters obtained from the MAS simulation are found to be close to each other.
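
    The abstract does not reproduce the equations; as a reminder of what the SEIR side of the comparison computes, the following is a minimal sketch of the standard SEIR ordinary differential equations integrated with a forward Euler step (all parameter values are illustrative, not taken from the paper).

```python
# Minimal SEIR sketch (illustrative parameters, not the paper's).
def simulate_seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of
        dS/dt = -beta*S*I,  dE/dt = beta*S*I - sigma*E,
        dI/dt = sigma*E - gamma*I,  dR/dt = gamma*I
    for a single homogeneous population expressed as fractions."""
    s, e, i, r = s0, e0, i0, r0
    history = [(0.0, s, e, i, r)]
    for n in range(1, int(days / dt) + 1):
        new_exposed = beta * s * i * dt
        new_infectious = sigma * e * dt
        new_removed = gamma * i * dt
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_removed
        r += new_removed
        history.append((n * dt, s, e, i, r))
    return history

# Example: parameters of this kind are what an early-stage MAS run would supply.
curve = simulate_seir(beta=0.5, sigma=1 / 3, gamma=1 / 7,
                      s0=0.99, e0=0.0, i0=0.01, r0=0.0, days=120)
```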

  • Performability Modeling for Software System with Performance Degradation and Reliability Growth

    Koichi TOKUNO  Shigeru YAMADA  

     
    PAPER

      Page(s):
    1563-1571

    In this paper, we discuss software performability evaluation considering the real-time property, defined as the attribute that the system completes a task within the stipulated response time limit. We assume that the software system has two operational states from the viewpoint of the end users: operating with the desirable performance level according to the specification, or operating with a degraded performance level. The dynamic software reliability growth process with performance degradation is described by an extended Markovian software reliability model with imperfect debugging. Assuming that the software system can process multiple tasks simultaneously and that the arrival process of the tasks follows a nonhomogeneous Poisson process, we analyze the distribution of the number of tasks whose processing can be completed within the processing time limit, using an infinite-server queueing model. We derive several software performability measures that reflect the real-time property; these are given as functions of time and of the number of debugging activities. Finally, we illustrate several numerical examples of the measures to investigate the impact of considering performance degradation on system performability evaluation.
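
    The paper derives this distribution analytically; purely as an illustration of the quantity being analyzed, the sketch below counts, by Monte Carlo, how many tasks arriving by a nonhomogeneous Poisson process finish within a deadline when every task is served at once (infinite-server assumption). The intensity function, the exponential service-time assumption and the deadline are made up for the example, not the paper's model.

```python
import math
import random

def completed_within_limit(rate_fn, rate_max, horizon, service_rate, limit, seed=1):
    """Monte Carlo count of tasks completed within the processing time limit.
    Arrivals follow a nonhomogeneous Poisson process simulated by thinning;
    each task starts service immediately (infinite-server queue) and its
    processing time is assumed exponential for this sketch."""
    rng = random.Random(seed)
    t, completed, arrived = 0.0, 0, 0
    while True:
        t += rng.expovariate(rate_max)               # candidate arrival
        if t > horizon:
            break
        if rng.random() <= rate_fn(t) / rate_max:    # accept with prob rate(t)/rate_max
            arrived += 1
            if rng.expovariate(service_rate) <= limit:
                completed += 1
    return completed, arrived

done, total = completed_within_limit(
    rate_fn=lambda t: 5.0 * (1.0 - math.exp(-0.1 * t)),  # assumed NHPP intensity
    rate_max=5.0, horizon=100.0, service_rate=2.0, limit=1.0)
```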

  • Random Checkpoint Models with N Tandem Tasks

    Toshio NAKAGAWA  Kenichiro NARUSE  Sayori MAEJI  

     
    PAPER

      Page(s):
    1572-1577

    We have a job with N tandem tasks, each of which is executed successively until it is completed. A double modular system of error detection is adopted for the processing of each task. Either type of checkpoint, a compare-checkpoint or a compare-and-store-checkpoint, can be placed at the end of each task. Three schemes for processing such a job are considered and the mean execution time of each scheme is obtained. The three schemes are compared and the best scheme is determined numerically. As an example, a job with 4 tasks is given and 6 types of schemes are compared numerically. Finally, we consider a majority decision system as an error-masking system and compute the mean execution time for the three schemes.
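
    The three schemes themselves are not described in the abstract; as background only, the sketch below shows the standard renewal argument for the mean time to complete one checkpointed task that is re-executed whenever the duplicated executions disagree. The execution time, checkpoint overhead and error probability are hypothetical values, not the paper's.

```python
def mean_task_time(exec_time, checkpoint_overhead, error_prob):
    """Mean completion time of one task under retry-on-mismatch:
    each attempt costs exec_time + checkpoint_overhead and fails with
    probability error_prob, so E[T] = (a + c) / (1 - p)."""
    return (exec_time + checkpoint_overhead) / (1.0 - error_prob)

# A job of N tandem tasks under this simple retry model:
N = 4
print(N * mean_task_time(exec_time=1.0, checkpoint_overhead=0.1, error_prob=0.05))
```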

  • Efficient Genetic Algorithm for Optimal Arrangement in a Linear Consecutive-k-out-of-n: F System

    Koji SHINGYOCHI  Hisashi YAMAMOTO  

     
    PAPER

      Page(s):
    1578-1584

    A linear consecutive-k-out-of-n: F system is an ordered sequence of n components that fails if, and only if, k or more consecutive components fail. Optimal arrangement is one of the main problems for this kind of system: we want to find an arrangement of components that maximizes the system reliability, when the components need not have equal failure probabilities and are mutually statistically independent. As n becomes large, however, the amount of computation becomes too great to solve the problem within a reasonable time, even on a high-performance computer. Hanafusa and Yamamoto proposed applying a Genetic Algorithm (GA), known as a powerful tool for many optimization problems, to obtain quasi-optimal arrangements for a linear consecutive-k-out-of-n: F system. They also proposed an ordinal representation that produces only arrangements satisfying the necessary conditions for optimality and eliminates redundant arrangements with the same system reliability produced by reversing an arrangement. In this paper, we propose a more efficient GA: we modify the previous work so that components with low failure probabilities, that is, reliable components, are allocated at equal intervals, because such arrangements seem to have relatively high system reliability. Through numerical experiments, we observed that the proposed GA with interval k provides better solutions than the previous work in most cases.
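
    A GA for this problem needs the system reliability of a candidate arrangement as its fitness. The sketch below computes that reliability exactly for a linear consecutive-k-out-of-n: F system by tracking the current run of failed components; the component failure probabilities are made-up example values, and the GA itself (ordinal representation, interval-k allocation) is not reproduced.

```python
def consecutive_k_f_reliability(q, k):
    """Exact reliability of a linear consecutive-k-out-of-n:F system.
    q[i] is the failure probability of the component placed at position i
    (components mutually statistically independent).  state[j] holds the
    probability that no k consecutive failures have occurred so far and the
    current run of failed components has length j (0 <= j < k)."""
    state = [0.0] * k
    state[0] = 1.0
    for qi in q:
        new_state = [0.0] * k
        new_state[0] = sum(state) * (1.0 - qi)   # component works: run resets
        for j in range(1, k):
            new_state[j] = state[j - 1] * qi     # component fails: run grows
        state = new_state
    return sum(state)

# Fitness comparison of two arrangements of the same (hypothetical) components:
q = [0.05, 0.2, 0.1, 0.3, 0.15]
print(consecutive_k_f_reliability(q, k=2))
print(consecutive_k_f_reliability(sorted(q), k=2))
```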

  • A Cyber-Attack Detection Model Based on Multivariate Analyses

    Yuto SAKAI  Koichiro RINSAKA  Tadashi DOHI  

     
    PAPER

      Page(s):
    1585-1592

    In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and cluster analysis. We quantify the observed qualitative audit event sequences via quantification method IV and collect similar audit event sequences into the same groups by cluster analysis. Simulation experiments show that our model can improve cyber-attack detection accuracy in some realistic cases where normal and attack activities are intermingled.
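
    The quantification step (Hayashi's method IV) is not reproduced here; assuming each audit event sequence has already been mapped to a numeric vector, the grouping step can be sketched with a plain k-means clustering as below. The toy vectors and the choice of k-means are illustrative assumptions, not the paper's exact procedure.

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Group numeric vectors (here standing in for quantified audit event
    sequences) into k clusters of similar sequences."""
    rng = random.Random(seed)
    centers = [list(v) for v in rng.sample(vectors, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            d = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centers]
            clusters[d.index(min(d))].append(v)
        for i, members in enumerate(clusters):
            if members:   # recompute each centre as the mean of its members
                centers[i] = [sum(x) / len(members) for x in zip(*members)]
    return centers, clusters

centers, groups = kmeans([[1, 0], [1, 1], [9, 8], [8, 9]], k=2)
```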

  • Calculating Method for the System State Distributions of Generalized Multi-State k-out-of-n:F Systems

    Hisashi YAMAMOTO  Tomoaki AKIBA  Hideki NAGATSUKA  

     
    PAPER

      Page(s):
    1593-1599

    In this paper, we first propose a new recursive algorithm for evaluating generalized multi-state k-out-of-n:F systems. The recursive algorithm can be applied even when the states of the components are non-i.i.d. random variables. It is useful for any multi-state k-out-of-n:F system, including the decreasing, increasing and constant multi-state k-out-of-n:F systems, and it can also evaluate the state distributions of other, non-monotonic, multi-state k-out-of-n:F systems. Next, we analyze the order of the computing time and the memory requirement of the proposed algorithm. Numerical experiments in the non-i.i.d. case show that the proposed algorithm is efficient for evaluating the system state distribution of a multi-state k-out-of-n:F system when n is large and the thresholds k_l are small.

  • Availability Analysis of a Two-Echelon Repair Model for Systems Comprising Multiple Items

    Nobuyuki TAMURA  Daiki MURAOKA  Tetsushi YUGE  Shigeru YANAGI  

     
    PAPER

      Page(s):
    1600-1607

    This paper considers a two-echelon repair model in which several series systems comprising multiple items are operated in each base. We propose a basic model and two modified models, and for two of the models we develop approximation methods to derive the system availability. The difference between the basic model and the first modified model is whether the normal items in failed series systems are available as spares or not. The second modified model relaxes the assumptions of the first modified model to reflect a more realistic situation. We perform numerical analysis of the models to compare their system availabilities and to verify the accuracy of the approximation methods.

  • An Efficient Bayesian Estimation of Ordered Parameters of Two Exponential Distributions

    Hideki NAGATSUKA  Toshinari KAMAKURA  Tsunenori ISHIOKA  

     
    PAPER

      Page(s):
    1608-1614

    Situations in which several population parameters must be estimated simultaneously arise frequently in a wide range of applications, including reliability modeling, survival analysis and biological studies. In this paper, we propose Bayesian methods for estimating the ordered parameters of two exponential populations; the methods incorporate prior information about the simple order restriction but may occasionally break it. A simulation study shows that the proposed estimators are more efficient, in terms of mean square error, than the isotonic regression of the maximum likelihood estimators with equal weights. Finally, an illustrative example is presented.
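
    The paper's prior and estimator are not given in the abstract; the sketch below shows one generic way to impose a simple order restriction lambda1 <= lambda2 in a Bayesian computation, using conjugate Gamma posteriors and rejection of draws that violate the order. The hyperparameters and data are illustrative, and this is not the estimator proposed in the paper.

```python
import random

def ordered_exponential_posterior_means(x1, x2, a=1.0, b=1.0, n_draws=10000, seed=0):
    """Posterior means of exponential rates lam1 <= lam2 under independent
    Gamma(a, rate=b) priors, with the order imposed by rejection sampling."""
    rng = random.Random(seed)
    shape1, rate1 = a + len(x1), b + sum(x1)
    shape2, rate2 = a + len(x2), b + sum(x2)
    kept1, kept2 = [], []
    while len(kept1) < n_draws:
        l1 = rng.gammavariate(shape1, 1.0 / rate1)   # gammavariate takes a scale
        l2 = rng.gammavariate(shape2, 1.0 / rate2)
        if l1 <= l2:                                  # keep only ordered draws
            kept1.append(l1)
            kept2.append(l2)
    return sum(kept1) / n_draws, sum(kept2) / n_draws

m1, m2 = ordered_exponential_posterior_means([1.2, 0.7, 2.5], [0.3, 0.4, 0.6])
```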

  • Software Reliability Modeling Based on Capture-Recapture Sampling

    Hiroyuki OKAMURA  Tadashi DOHI  

     
    PAPER

      Page(s):
    1615-1622

    This paper proposes a dynamic capture-recapture (DCR) model to estimate not only the total number of software faults but also quantitative software reliability from observed data. Compared with the conventional static capture-recapture (SCR) model and the usual software reliability models (SRMs) in the literature, the DCR model can handle the dynamic behavior of software fault-detection processes and can evaluate quantitative software reliability based on capture-recapture sampling of software fault data; it can be regarded as a unified modeling framework for SCR and SRMs with Bayesian estimation. Simulation experiments under several plausible testing scenarios show that our models are superior to SCR and SRMs in terms of estimation accuracy.
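
    The DCR model itself is not reproduced here; as background for the sampling idea it builds on, the sketch below shows the classic static capture-recapture (Lincoln-Petersen) estimate of the total fault count from two independent reviews. The numbers are made up.

```python
def lincoln_petersen_total_faults(n1, n2, overlap):
    """Classic static capture-recapture estimate of the total number of
    faults: n1 found by the first review, n2 by the second, `overlap` found
    by both.  The paper's dynamic (DCR) model generalizes this idea to
    fault-detection processes observed over time."""
    if overlap == 0:
        raise ValueError("estimator needs at least one fault found by both reviews")
    return n1 * n2 / overlap

# Example: 25 and 30 faults found, 10 in common -> about 75 faults in total.
print(lincoln_petersen_total_faults(25, 30, 10))
```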

  • Regular Section
  • Low Power MAC Design with Variable Precision Support

    Young-Geun LEE  Han-Sam JUNG  Ki-Seok CHUNG  

     
    PAPER-Digital Signal Processing

      Page(s):
    1623-1632

    Many DSP applications, such as FIR filtering and the DCT (discrete cosine transformation), require multiplication by constants, so optimizing the performance of constant multiplication improves the overall performance of these applications. It is well known that a shift can replace a constant multiplication if the constant is a power of two. In this paper, we extend this idea: by employing more than two barrel shifters, we can design highly efficient constant multipliers. We have found that by using two or three shifters we can generate a large set of constants, and with these constants a typical set of FIR or DCT applications can be executed with small errors. Furthermore, with variable precision support, a fairly large class of DSP applications can be carried out with high computational efficiency. Compared to conventional multipliers, we achieve power savings of up to 56% with negligible computational errors.
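
    As a software illustration of the shift-based idea (not the proposed MAC hardware), the sketch below greedily approximates an integer constant by a few signed powers of two, so that the multiplication can be carried out with barrel shifts and additions; the residual is the kind of small computational error the abstract refers to.

```python
def shift_add_terms(constant, max_terms=3):
    """Greedy approximation of an integer constant by a sum of signed
    powers of two (one barrel shift per term).  Returns the terms and the
    remaining approximation error."""
    terms, remainder = [], constant
    for _ in range(max_terms):
        if remainder == 0:
            break
        shift = max(abs(remainder).bit_length() - 1, 0)
        if abs(remainder) - 2 ** shift > 2 ** (shift + 1) - abs(remainder):
            shift += 1                                # round to the nearer power
        term = (1 if remainder > 0 else -1) * 2 ** shift
        terms.append(term)
        remainder -= term
    return terms, remainder

def shift_add_multiply(x, terms):
    """Multiply x by the approximated constant using only shifts and adds."""
    total = 0
    for term in terms:
        shift = abs(term).bit_length() - 1
        total += (x << shift) if term > 0 else -(x << shift)
    return total

terms, err = shift_add_terms(217)          # 217 ~ 256 - 32 - 8, error 1
print(terms, err, shift_add_multiply(5, terms))
```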

  • A Novel Design of Regular Cosine-Modulated Filter Banks for Image Coding

    Toshiyuki UTO  Masaaki IKEHARA  Kenji OHUE  

     
    PAPER-Digital Signal Processing

      Page(s):
    1633-1641

    This paper describes a design method for cosine-modulated filter banks (CMFB's) for efficient image coding. While the CMFB has the advantages of low design and implementation cost, its subband filters do not have the linear-phase property; this prevents the use of a symmetric extension in the transformation process and degrades image compression performance. However, a recently proposed smooth extension alleviates this problem, so well-designed CMFB's can be expected to be good candidates for the transform block in image compression applications. In this paper, we present a novel design approach for regular CMFB's. After introducing a regularity constraint on the lattice parameters of the prototype filter in paraunitary (PU) CMFB's, we also derive a regularity condition for perfect-reconstruction (PR) CMFB's. Finally, we design regular 8-channel PUCMFB and PRCMFB by unconstrained optimization of the residual lattice parameters, and simulation results for test images are compared with various transforms to evaluate the proposed image coder based on CMFB's with one degree of regularity. In addition, we show the computational complexity of the designed CMFB's.

  • Interacting Self-Timed Pipelines and Elementary Coupling Control Modules

    Kazuhiro KOMATSU  Shuji SANNOMIYA  Makoto IWATA  Hiroaki TERADA  Suguru KAMEDA  Kazuo TSUBOUCHI  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    1642-1651

    The self-timed pipeline (STP) is one of the most promising VLSI/SoC architectures: it achieves efficient utilization of tens of billions of transistors, consumes ultra-low power, and is easy to design because of its signal integrity and low electromagnetic interference. These basic features of the STP have been proven by the development of the self-timed data-driven multimedia processors, DDMP's. This paper proposes a novel scheme of interacting self-timed (clockless) pipelines by which various distributed and interconnected pipelines can achieve highly functional stream processing in future giga-transistor chips. The paper also proposes a set of elementary coupling control modules that facilitate various combinations of flow-through processing between pipelines, and then discusses the practicality of the proposed scheme through the LSI design of application modules such as a priority-based queue, a mutual interconnection network, and a pipelined sorter.

  • Characterizing Intra-Die Spatial Correlation Using Spectral Density Fitting Method

    Qiang FU  Wai-Shing LUK  Jun TAO  Changhao YAN  Xuan ZENG  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    1652-1659

    In this paper, a spectral-domain method named the SDF (Spectral Density Fitting) method for intra-die spatial correlation function extraction is presented. Based on the theoretical analysis of random fields, the spectral density, as the spectral-domain counterpart of the correlation function, is employed to estimate the parameters of the correlation function effectively in the spectral domain. Compared with the existing extraction algorithm in the original spatial domain, the SDF method obtains results of the same quality in the spectral domain. In the actual measurement process, unavoidable measurement error with arbitrary frequency components can greatly confound the extraction results; a filtering technique is therefore developed to diminish the high-frequency components of the measurement error and to recover the data from noise contamination for parameter estimation. Experimental results show that the SDF method is practical and stable.

  • Optimised Versions of the Ate and Twisted Ate Pairings

    Seiichi MATSUDA  Naoki KANAYAMA  Florian HESS  Eiji OKAMOTO  

     
    PAPER-Cryptography and Information Security

      Page(s):
    1660-1667

    We observe a natural generalisation of the ate and twisted ate pairings, which allows for performance improvements in non-standard applications of pairings to cryptography, such as composite group orders. We also give a performance comparison of our pairings with the Tate, ate and twisted ate pairings for certain polynomial families, based on operation count estimations and on an implementation, showing that our pairings can achieve a speedup of up to a factor of two over the other pairings.

  • M-Ary Substitution/Deletion/Insertion/Adjacent-Symbol-Transposition Error Correcting Codes for Data Entry Systems

    Haruhiko KANEKO  Eiji FUJIWARA  

     
    PAPER-Coding Theory

      Page(s):
    1668-1676

    Nonbinary M-ary data processed by data entry systems, such as keyboard devices and character recognition systems, often suffer various types of errors, such as symbol-substitution errors, deletion errors, insertion errors, and adjacent-symbol-transposition errors. This paper proposes nonsystematic M-ary codes capable of correcting these errors. The code is defined as a set of codewords satisfying three conditions required to correct substitution, deletion/insertion, and adjacent-symbol-transposition errors. Since symbol-substitution errors in data entry systems are usually asymmetric, this paper also presents asymmetric-symbol-substitution error correcting codes capable of correcting deletion, insertion, and adjacent-symbol-transposition errors. For asymmetric-symbol-substitution error correction, we employ a mapping derived from vertex coloring of an error directionality graph. The evaluation shows that the asymmetric codes have three to five times as many codewords as the symmetric codes.

  • A Probabilistic Algorithm for Computing the Weight Distribution of LDPC Codes

    Masanori HIROTOMO  Masami MOHRI  Masakatu MORII  

     
    PAPER-Coding Theory

      Page(s):
    1677-1689

    Low-density parity-check (LDPC) codes are linear block codes defined by sparse parity-check matrices. They exhibit excellent performance under iterative decoding, and the weight distribution is used to analyze the lower bound of the error probability of their decoding performance. In this paper, we propose a probabilistic method for computing the weight distribution of LDPC codes. The proposed method efficiently finds low-weight codewords in a given LDPC code by using Stern's algorithm, and stochastically computes the low-weight part of the weight distribution from the frequency of the found codewords. It is based on a relation between the number of codewords of a given weight and the rate at which Stern's algorithm generates those codewords. In numerical results for LDPC codes of length 504, 1008 and 4896, the proposed method computed the weight distribution with greater accuracy than conventional methods.

  • A CMOS Spiking Neural Network Circuit with Symmetric/Asymmetric STDP Function

    Hideki TANAKA  Takashi MORIE  Kazuyuki AIHARA  

     
    PAPER-Neural Networks and Bioengineering

      Page(s):
    1690-1698

    In this paper, we propose an analog CMOS circuit that realizes spiking neural networks with spike-timing dependent synaptic plasticity (STDP). In particular, we propose an STDP circuit with a symmetric function for the first time, and we demonstrate associative memory operation in a Hopfield-type feedback network with STDP learning. In our spiking neuron model, analog information expressing processing results is given by the relative timing of spike firing events. It is well known that a biological neuron changes its synaptic weights by STDP, which provides learning rules depending on the relative timing between asynchronous spikes; STDP can therefore be used for spiking neural systems with a learning function. Measurement results of chips fabricated in a TSMC 0.25 µm CMOS process demonstrate that our spiking neuron circuit can construct feedback networks and update synaptic weights based on the relative timing between asynchronous spikes with a symmetric or an asymmetric STDP circuit.
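
    The circuit details are in the paper; the sketch below only illustrates the two learning windows being contrasted: an asymmetric STDP rule, whose sign depends on which spike comes first, and a symmetric one, which depends only on the timing difference. Amplitudes and the 20 ms time constant are illustrative, not the fabricated circuit's values.

```python
import math

def asymmetric_stdp(dt, a_plus=0.10, a_minus=0.12, tau=20.0):
    """Asymmetric window: potentiation when the presynaptic spike precedes
    the postsynaptic spike (dt > 0 ms), depression otherwise."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def symmetric_stdp(dt, a=0.10, tau=20.0):
    """Symmetric window: the weight change depends only on |dt|, the form
    used for the Hopfield-type associative memory operation."""
    return a * math.exp(-abs(dt) / tau)

# One weight update for a spike pair 5 ms apart, clipped to [0, 1]:
w = 0.5
w = min(1.0, max(0.0, w + asymmetric_stdp(dt=5.0)))
```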

  • Delay Coefficients Based Variable Regularization Subband Affine Projection Algorithms in Acoustic Echo Cancellation Applications

    Karthik MURALIDHAR  Kwok Hung LI  Sapna GEORGE  

     
    LETTER-Engineering Acoustics

      Page(s):
    1699-1703

    To attain good performance in an acoustic echo cancellation system, it is important to have a variable step size (VSS) algorithm as part of the adaptive filter. In this paper, we develop a VSS algorithm for a recently proposed subband affine projection (SAP) adaptive filter. Two popular VSS algorithms in the literature are the methods of delayed coefficients (DC) and variable regularization (VR); however, their merits and demerits are mutually exclusive. We propose a VSS algorithm that is a hybrid of the two methods and combines their advantages. An extensive study of the new algorithm in different scenarios, such as double-talk (DT) during the transient phase of the adaptive filter, DT during steady state, and varying DT power, is conducted, and reasoning is given to support the observed behavior. The importance of the VR method as part of a VSS algorithm is emphasized.

  • Novel Consecutive-Pilot Design for Phase Noise Suppression in OFDM System

    Fang YANG  Jun WANG  Jintao WANG  Jian Song   Zhixing YANG  

     
    LETTER-Noise and Vibration

      Page(s):
    1704-1707

    In this paper, a novel consecutive-pilot design is proposed to suppress phase noise (PHN) in an orthogonal frequency-division multiplexing (OFDM) system. The PHN is estimated by cross-correlating the received and the locally generated pilots in the frequency domain. Simulations show that the proposed scheme can effectively ameliorate the impairment due to PHN at the cost of acceptable additional transmission bandwidth and low implementation complexity.
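
    The proposed pilot pattern and estimator are not reproduced here; the sketch below only illustrates the cross-correlation idea: the common phase rotation of the received pilot subcarriers is estimated by correlating them with the locally generated pilots and taking the angle. Pilot values, the noise level, and the single common-phase-error model are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
pilots = np.exp(1j * np.pi * rng.integers(0, 2, 16))      # known +/-1 pilot symbols
true_phase = 0.12                                          # common phase error (rad)
noise = 0.05 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
received = pilots * np.exp(1j * true_phase) + noise

# Cross-correlate received and local pilots in the frequency domain.
estimated_phase = np.angle(np.sum(received * np.conj(pilots)))
corrected = received * np.exp(-1j * estimated_phase)
```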

  • Intelligent Controller Implementation for Decreasing Splash in Inverter Spot Welding

    Joon-Ik SON  Young-Do IM  

     
    LETTER-Systems and Control

      Page(s):
    1708-1712

    This study implements an intelligent controller using a fuzzy control algorithm to minimize cold welds and splash in inverter AC spot welding. It presents an experimental curve of the welding output current versus the maximum value of the Instantaneous Heating Rate (IHRmax), using the contact diameter of the electrode as the parameter, and an experimental curve of the welding output current versus the slope (S) of the instantaneous dynamic resistance, using the instantaneous contact area of the electrode as the parameter. To minimize cold welds and splash, the study proposes an intelligent controller that controls the optimum welding current in real time by estimating the contact diameter of the electrode and the contact area of the initial welding part.

  • Two-Quadrant Compact CMOS Current Divider

    Kuo-Jen LIN  

     
    LETTER-Circuit Theory

      Page(s):
    1713-1715

    A two-quadrant CMOS current divider using a two-variable second-order Taylor series approximation is proposed. The divider based on this approximation is realized with a compact circuit. Simulation results indicate that the compact divider achieves sufficient accuracy, small distortion, and high bandwidth with only a 1.8 V supply voltage.

  • An Efficient Dynamic Hash Index Structure for NAND Flash Memory

    Chul-Woong YANG  Ki Yong LEE  Myoung Ho KIM  Yoon-Joon LEE  

     
    LETTER-Algorithms and Data Structures

      Page(s):
    1716-1719

    We propose an efficient dynamic hash index structure suitable for a NAND flash memory environment. Since write operations incur significant overhead in NAND flash memory, our index structure is designed to minimize the number of write operations required for hash index updates. Through a set of extensive experiments, we show the effectiveness of the proposed hash index structure in a NAND flash memory environment.

  • More Efficient Threshold Signature Scheme in Gap Diffie-Hellman Group

    DaeHun NYANG  Akihiro YAMAMURA  

     
    LETTER-Cryptography and Information Security

      Page(s):
    1720-1723

    By modifying the private key and public key setting of the Boneh-Lynn-Shacham (BLS) short signature scheme, a variation of the BLS short signature scheme is proposed. Based on this variation, we present a very efficient threshold signature scheme in which the number of pairing computations for signature share verification is reduced by half.

  • Cryptanalysis of Chatterjee-Sarkar Hierarchical Identity-Based Encryption Scheme at PKC 06

    Jong Hwan PARK  Dong Hoon LEE  

     
    LETTER-Cryptography and Information Security

      Page(s):
    1724-1726

    In 2006, Chatterjee and Sarkar proposed a hierarchical identity-based encryption (HIBE) scheme that can support an unbounded number of identity levels. This property is particularly useful for providing forward secrecy by embedding time components within hierarchical identities. In this paper, we show that their scheme does not provide the claimed property. Our analysis shows that if the number of identity levels becomes larger than the value of a fixed public parameter, an unintended receiver can reconstruct a new valid ciphertext and decrypt it using his or her own private key. The analysis applies similarly to a multi-receiver identity-based encryption scheme presented as an application of Chatterjee and Sarkar's HIBE scheme.

  • Spatial-Temporal Combining-Based ZF Detection in Ultra-Wideband Communications

    Jinyoung AN  Sangchoon KIM  

     
    LETTER-Communication Theory and Signals

      Page(s):
    1727-1730

    The performance of an ultra-wideband (UWB) multiple-input multiple-output (MIMO) receiver based on a RAKE maximal ratio combiner (MRC) followed by a zero-forcing (ZF) detector is analytically examined. For a UWB MIMO system with N_T transmit antennas, N_R receive antennas, and L resolvable multipath components, the proposed MIMO detection scheme is shown to have a diversity order of LN_R - N_T + 1, and its analytical error rate expression is presented for a log-normal fading channel. We also compare the analytical BERs with simulated results.

  • New Perfect Polyphase Sequences and Mutually Orthogonal ZCZ Polyphase Sequence Sets

    Fanxin ZENG  

     
    LETTER-Spread Spectrum Technologies and Applications

      Page(s):
    1731-1736

    In communication systems, ZCZ sequences and perfect sequences play important roles in removing multiple-access interference (MAI) and in synchronization, respectively. Based on an uncorrelated polyphase base sequence set, a novel construction method is presented that produces mutually orthogonal (MO) ZCZ polyphase sequence (PS) sets and perfect PSs. The obtained ZCZ PSs of each set have ideal periodic cross-correlation functions (PCCFs); in other words, the PCCF between two different sequences of a set vanishes, and sequences from different sets are orthogonal. Furthermore, the proposed perfect PSs include the Frank perfect PSs as a special case, and the family size of the former is much larger than that of the latter.
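
    The construction in the letter is not reproduced here; as a reference point for the special case it contains, the sketch below generates a Frank perfect polyphase sequence and checks its perfect periodic autocorrelation numerically.

```python
import numpy as np

def frank_sequence(N):
    """Frank perfect polyphase sequence of length N*N:
    the (q*N + r)-th element is exp(2j*pi*q*r/N)."""
    q, r = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * q * r / N).reshape(-1)

def periodic_autocorrelation(s):
    """Periodic autocorrelation for all cyclic shifts, computed via the FFT."""
    return np.fft.ifft(np.abs(np.fft.fft(s)) ** 2)

s = frank_sequence(4)                       # length-16 Frank sequence
acf = periodic_autocorrelation(s)
# Perfect sequence: all out-of-phase autocorrelations are (numerically) zero.
print(np.max(np.abs(acf[1:])))
```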

  • Measuring Particles in Joint Feature-Spatial Space

    Liang SHA  Guijin WANG  Anbang YAO  Xinggang LIN  

     
    LETTER-Vision

      Page(s):
    1737-1742

    The particle filter has attracted increasing attention from researchers in object tracking because of its promising ability to handle nonlinear and non-Gaussian systems. In this paper, we explore the problem of precisely estimating the observation likelihoods of particles in the joint feature-spatial space. For this purpose, a similarity based on a mixture Gaussian kernel function is presented to evaluate the discrepancy between the target region and a particle region; this similarity can be interpreted as the expectation of the spatially weighted feature distribution over the target region. To adapt to sudden bursts of object motion, we also present a method that appropriately adjusts the state transition model by utilizing priors on motion speed and object size. In comparison with the standard particle filter tracker, our tracking algorithm shows better performance on challenging video sequences.

  • Statistical Mechanical Analysis of Simultaneous Perturbation Learning

    Seiji MIYOSHI  Hiroomi HIKAWA  Yutaka MAEDA  

     
    LETTER-Neural Networks and Bioengineering

      Page(s):
    1743-1746

    We show that simultaneous perturbation can be used as an on-line learning algorithm, and we report a theoretical investigation of its generalization performance obtained with a statistical mechanical method. The asymptotic behavior of the generalization error under this algorithm is on the order of t^(-1/3), where t is the learning time or the number of learning examples. This order is the same as that of the well-known perceptron learning.
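
    The statistical mechanical analysis is the letter's contribution; the sketch below only recalls what a simultaneous perturbation update looks like: every weight is perturbed at once and a single loss difference supplies the gradient estimate. The step sizes and the toy quadratic loss are illustrative assumptions.

```python
import random

def simultaneous_perturbation_step(w, loss, a=0.05, c=0.01, rng=random):
    """One simultaneous-perturbation update: perturb all weights by +/-c at
    the same time and estimate every partial derivative from one loss
    difference."""
    delta = [rng.choice((-1.0, 1.0)) for _ in w]
    w_plus = [wi + c * di for wi, di in zip(w, delta)]
    w_minus = [wi - c * di for wi, di in zip(w, delta)]
    diff = (loss(w_plus) - loss(w_minus)) / (2.0 * c)
    return [wi - a * diff / di for wi, di in zip(w, delta)]

# Toy usage: drive a quadratic loss toward its minimum with the same rule.
loss = lambda w: sum((wi - 1.0) ** 2 for wi in w)
w = [0.0, 0.0, 0.0]
for _ in range(200):
    w = simultaneous_perturbation_step(w, loss)
```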