
IEICE TRANSACTIONS on Fundamentals

  • Impact Factor

    0.48

  • Eigenfactor

    0.003

  • Article Influence

    0.1

  • Cite Score

    1.1


Volume E89-A No.1  (Publication Date: 2006/01/01)

    Special Section on Cryptography and Information Security
  • FOREWORD

    Shinichi KAWAMURA  

     
    FOREWORD

      Page(s):
    1-1
  • How to Maximize Software Performance of Symmetric Primitives on Pentium III and 4

    Mitsuru MATSUI  Sayaka FUKUDA  

     
    PAPER-Symmetric Key Cryptography

      Page(s):
    2-10

    This paper studies state-of-the-art software optimization methodology for symmetric cryptographic primitives on the Pentium III and Pentium 4 processors. We aim at maximizing speed by considering the internal pipeline architecture of these processors. This is the first paper to study optimization of ciphers on Prescott, the new core of the Pentium 4. Our AES program with a 128-bit key achieves 251 cycles/block on the Pentium 4, which is, to the best of our knowledge, the fastest implementation of AES on that processor. We also optimize the SNOW2.0 keystream generator. Our SNOW2.0 program runs at a rate of 2.75 µops/cycle on the Pentium III, which appears to be the most efficient code ever written for a real-world cipher primitive. Another interest of ours is optimizing cryptographic primitives that essentially rely on 64-bit operations on Pentium processors. For the first example, the FOX128 block cipher, we propose a speed-up technique that interleaves two independent blocks using register group separation. As further examples, we consider fast implementations of SHA512 and Whirlpool. We show that the new SIMD instruction sets introduced in the Pentium 4 contribute greatly to fast hashing of SHA512.

  • Relation between the XL Algorithm and Gröbner Basis Algorithms

    Makoto SUGITA  Mitsuru KAWAZOE  Hideki IMAI  

     
    PAPER-Symmetric Key Cryptography

      Page(s):
    11-18

    We clarify the relation between the XL algorithm and known Gröbner basis algorithms. The XL algorithm was proposed as a more efficient way to solve a system of algebraic equations under a special condition, without computing a whole Gröbner basis. We show, however, that solving a system of algebraic equations under the special condition for which the XL algorithm works is equivalent to computing the reduced Gröbner basis of the ideal associated with the system. Moreover, we show that the XL algorithm is itself a Gröbner basis algorithm that can be represented as a redundant variant of the known Gröbner basis algorithm F4.
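
    A minimal illustrative sketch (not taken from the paper): computing the reduced Gröbner basis of a small polynomial system with SymPy. For a zero-dimensional system, the lexicographic reduced basis directly exposes the solutions, which is the setting in which the XL algorithm is usually applied; the system and field below are hypothetical toy choices.

      # Reduced Groebner basis of a toy system over a small prime field.
      from sympy import symbols, groebner, GF

      x, y = symbols('x y')
      polys = [x**2 + y - 3, x*y + x - 2]            # hypothetical toy system
      G = groebner(polys, x, y, order='lex', domain=GF(7))
      print(G)   # the reduced Groebner basis (lex order) of the associated ideal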

  • Quadratic Equations from APN Power Functions

    Jung Hee CHEON  Dong Hoon LEE  

     
    PAPER-Symmetric Key Cryptography

      Page(s):
    19-27

    We develop several tools to derive quadratic equations from algebraic S-boxes and to prove their linear independence. By applying them to all known almost perfect nonlinear (APN) power functions and the inverse function, we can estimate their resistance against algebraic attacks. As a result, we show that APN functions differ in their resistance against algebraic attacks, and that S-boxes with Gold or Kasami exponents in particular have very weak resistance.

  • A Universally Composable Secure Channel Based on the KEM-DEM Framework

    Waka NAGAO  Yoshifumi MANABE  Tatsuaki OKAMOTO  

     
    PAPER-Public Key Cryptography

      Page(s):
    28-38

    As part of the ISO standards on public-key encryption, Shoup introduced the framework of KEM (Key Encapsulation Mechanism) and DEM (Data Encapsulation Mechanism) for formalizing and realizing one-directional hybrid encryption; KEM is a formalization of asymmetric encryption specified for key distribution, while DEM is a formalization of symmetric encryption. This paper investigates a more general hybrid protocol, a secure channel, that uses KEM and DEM: KEM supports distribution of a session key, and DEM, together with the session key, is used for multiple bi-directional encrypted transactions within a session. This paper shows that KEM, which is semantically secure against adaptive chosen ciphertext attacks (IND-CCA2), and DEM, which is semantically secure against adaptive chosen plaintext/ciphertext attacks (IND-P2-C2), together with secure signatures and an ideal certification authority, are sufficient to realize a universally composable (UC) secure channel. To obtain the main result, this paper also shows several equivalence results: UC KEM, IND-CCA2 KEM and NM-CCA2 (non-malleable against CCA2) KEM are equivalent, and UC DEM, IND-P2-C2 DEM and NM-P2-C2 DEM are equivalent.
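
    A minimal sketch (my own, not the paper's construction) of the KEM-DEM shape of hybrid encryption, using RSA-OAEP as a stand-in key-encapsulation step and AES-GCM as the DEM, via the third-party "cryptography" package. It shows only the plumbing of the framework the abstract refers to, not the UC security analysis.

      import os
      from cryptography.hazmat.primitives.asymmetric import rsa, padding
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      # Receiver's long-term key pair.
      sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      pk = sk.public_key()
      oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)

      # KEM-like step: the sender picks a fresh session key and encapsulates it.
      session_key = AESGCM.generate_key(bit_length=128)
      encapsulated = pk.encrypt(session_key, oaep)

      # DEM step: the session key protects the transactions of the session.
      nonce = os.urandom(12)
      ciphertext = AESGCM(session_key).encrypt(nonce, b"order #42", b"session-1")

      # Receiver side: decapsulate the session key, then decrypt.
      key = sk.decrypt(encapsulated, oaep)
      assert AESGCM(key).decrypt(nonce, ciphertext, b"session-1") == b"order #42"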

  • k-Resilient Identity-Based Encryption in the Standard Model

    Swee-Huay HENG  Kaoru KUROSAWA  

     
    PAPER-Public Key Cryptography

      Page(s):
    39-46

    We present and analyze an adaptive chosen ciphertext secure (IND-CCA) identity-based encryption (IBE) scheme based on the well-studied Decisional Diffie-Hellman (DDH) assumption. The scheme is provably secure in the standard model assuming the adversary can adaptively corrupt up to a maximum of k users. This is in contrast to the Boneh-Franklin scheme, whose security proof holds in the random oracle model.

  • A Construction of Public-Key Cryptosystem Using Algebraic Coding on the Basis of Superimposition and Randomness

    Masao KASAHARA  

     
    PAPER-Public Key Cryptography

      Page(s):
    47-54

    In this paper, we present a new class of public-key cryptosystems (PKCs) using algebraic coding on the basis of superimposition and randomness. The proposed PKC features a generator matrix of a characteristic form, in which the generator matrix of an algebraic code is used repeatedly, along with the generator matrix of a random code, as sub-matrices. This generator matrix in the characteristic form will be referred to as the K-matrix. We show that the K-matrix yields the following advantages over conventional schemes: (i) it provides an abundant supply of PKCs, yielding more secure PKCs; (ii) it allows a short public key.

  • Plaintext Simulatability

    Eiichiro FUJISAKI  

     
    PAPER-Public Key Cryptography

      Page(s):
    55-65

    We propose a new security class, called plaintext simulatability, defined over public-key encryption schemes. The notion of plaintext simulatability (denoted PS) is similar to the notion of plaintext awareness (denoted PA) defined in [3], but it is "properly" a weaker security class for public-key encryption. It is known that PA implies the class of CCA2-secure encryption (denoted IND-CCA2) but not vice versa. In most cases PA is "unnecessarily" strong: it is used only to show that the public-key encryption scheme involved meets IND-CCA2, because it is much easier to handle membership in PA than to treat membership in IND-CCA2 directly. We show that PS also implies IND-CCA2, while preserving this technical advantage of PA. We present two novel CCA2-secure public-key encryption schemes that would otherwise have required more complicated security analyses. One is a random-oracle version of the Dolev-Dwork-Naor encryption scheme [8],[9]; unlike the original scheme, this construction is efficient. The other is a public-key encryption scheme based on a strong pseudo-random permutation family [16] which provides the optimal ciphertext length for verifying the validity of ciphertexts, i.e., (ciphertext size) = (message size) + (randomness size); according to [19], such a construction had remained open. Both schemes meet PS but not PA.

  • Schemes for Encryption with Anonymity and Ring Signature

    Ryotaro HAYASHI  Keisuke TANAKA  

     
    PAPER-Public Key Cryptography

      Page(s):
    66-73

    In this paper, we present new schemes for encryption with anonymity and for ring signatures by applying two techniques: we construct a key-privacy encryption scheme using N-ary representation, and a ring signature scheme using repeated evaluation of functions. We analyze the properties of these schemes precisely and show their advantages and disadvantages.

  • Efficient Identification of Bad Signatures in RSA-Type Batch Signature

    Seungwon LEE  Seongje CHO  Jongmoo CHOI  Yookun CHO  

     
    PAPER-Digital Signature

      Page(s):
    74-80

    As the use of electronic voting and e-commerce systems increases, efficient batch verification of digital signatures becomes more and more important. In this paper, we first propose a new method to efficiently identify bad signatures in a batch for the case where the batch contains one bad signature. The method finds the bad signature using a smaller number of modular multiplications than the existing method. We also propose an extension of the method that finds two or more bad signatures in a batch instance. Experimental results show that our method outperforms the existing method in terms of the number of modular multiplications.
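
    A minimal sketch (generic divide-and-conquer over a textbook-RSA product test, not the authors' improved method) of how a single bad signature can be located with far fewer modular multiplications than verifying every signature individually; all parameters below are hypothetical toy values.

      def batch_ok(sigs_msgs, e, n):
          """Product test for textbook RSA: prod(sig)^e == prod(msg) (mod n)."""
          prod_sig = prod_msg = 1
          for sig, msg in sigs_msgs:
              prod_sig = prod_sig * sig % n
              prod_msg = prod_msg * msg % n
          return pow(prod_sig, e, n) == prod_msg

      def find_bad(sigs_msgs, e, n):
          """Index of the single bad signature, or None if the batch verifies."""
          if batch_ok(sigs_msgs, e, n):
              return None
          lo, hi = 0, len(sigs_msgs)
          while hi - lo > 1:
              mid = (lo + hi) // 2
              if batch_ok(sigs_msgs[lo:mid], e, n):
                  lo = mid              # left half is fine, bad one is on the right
              else:
                  hi = mid              # bad one is in the left half
          return lo

      # Toy textbook-RSA parameters, used only to exercise the search.
      p, q, e = 10007, 10009, 65537
      n = p * q
      d = pow(e, -1, (p - 1) * (q - 1))
      msgs = list(range(101, 109))
      sigs = [pow(m, d, n) for m in msgs]
      sigs[5] += 1                      # corrupt one signature
      print(find_bad(list(zip(sigs, msgs)), e, n))   # -> 5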

  • Generalized Powering Functions and Their Application to Digital Signatures

    Hisayoshi SATO  Tsuyoshi TAKAGI  Satoru TEZUKA  Kazuo TAKARAGI  

     
    PAPER-Digital Signature

      Page(s):
    81-89

    This paper investigates some modular powering functions suitable for cryptography. It is well known that the Rabin encryption function is a 4-to-1 mapping and that breaking its one-wayness is as hard as factoring. The previously reported encryption schemes using a powering function are variants of either the 4-to-1 mapping or a higher n-to-1 mapping, where n > 4. In this paper, we propose an optimized powering function that is a 3-to-1 mapping using a p^2q-type modulus. The one-wayness of the proposed powering function is as hard as the infeasibility of the factoring problem. We present an efficient algorithm for computing the decryption for a p^2q-type modulus, which requires neither modular inversion nor division. Moreover, we construct new provably secure digital signatures as an application of the optimized functions. In order to achieve provable security in the random oracle model, a message is usually randomized using random hashing or padding; however, the randomization must be recomputed if the randomized message is a non-cubic residue element, which is inefficient for long messages. We propose an algorithm that can deterministically find the unique cubic residue element for a randomly chosen element.

  • Conversion Schemes for Unlinkable Signatures That Include Revocable Unlinkability

    Koji CHIDA  

     
    PAPER-Digital Signature

      Page(s):
    90-98

    This paper introduces the concept of "revocable unlinkability" for unlinkable anonymous signatures and proposes a generalized scheme that modifies the signatures to include revocable unlinkability. Revocable unlinkability provides a condition in which multiple messages signed using an unlinkable anonymous signature are unlinkable for anyone except the unlinkability revocation manager. Noteworthy is that the identifier of the signer is kept secret from the manager. In addition, examples are presented in which the proposed scheme is applied to existing group/ring signatures. The proposed scheme employs a verifiable MIX-net to shuffle the identifiers of all potential signers, thus giving it the potential for wide application to unlinkable anonymous signatures.

  • Flaws in Robust Optimistic Mix-Nets and Stronger Security Notions

    Masayuki ABE  Hideki IMAI  

     
    PAPER-Protocol

      Page(s):
    99-105

    The contribution of this paper is twofold. First, we point out weaknesses of two Mix-nets claimed in the literature to be robust. Since these flaws stem from weak security definitions, we then present a stronger security definition by regarding a Mix-net as a batch decryption algorithm of a CCA-secure public-key encryption scheme. We show two concrete attacks on the schemes proposed in [1] and [2]. The scheme in [1] loses anonymity in the presence of a malicious user even though all servers are honest. The scheme in [2] also loses anonymity through the collaboration of a malicious user and the first server; in this latter case the user can identify the plaintext sent by the targeted user by invoking two mix sessions, at the risk of the colluding server receiving an accusation. We also point out that in a certain case anonymity is violated by the user alone, without colluding with any server. Heuristic repairs are provided for both schemes.

  • Authorization-Limited Transformation-Free Proxy Cryptosystems and Their Security Analyses

    Lihua WANG  Zhenfu CAO  Takeshi OKAMOTO  Ying MIAO  Eiji OKAMOTO  

     
    PAPER-Protocol

      Page(s):
    106-114

    In this paper, authorization-limited transformation-free proxy cryptosystems (AL-TFP systems) are studied. An AL-TFP system is a modification of the original proxy cryptosystem introduced by Mambo et al.[8], in which a ciphertext transformation by the original decryptor is necessary, and also of the delegated decryption system proposed by Mu et al.[10]. In both of the systems proposed in [8] and [10], the original decryptors have to trust their proxies completely. The AL-TFP system allows the proxy decryptor to decrypt directly, without any ciphertext transformation by the original decryptor, so the original decryptor is relieved of a large amount of decryption work. Moreover, the original decryptor's privacy can be protected effectively because the authority of the proxy decryptor is limited to its duty and validity period. An active identity-based AL-TFP system and a directory-based one built from pairings are proposed. Furthermore, an application of the directory-based AL-TFP system to electronic commerce is also described. The security of the proposed schemes is based on the BDH assumption.

  • Radix-r Non-Adjacent Form and Its Application to Pairing-Based Cryptosystem

    Tsuyoshi TAKAGI  David REIS, Jr.  Sung-Ming YEN  Bo-Ching WU  

     
    PAPER-Elliptic Curve Cryptography

      Page(s):
    115-123

    Recently, the radix-3 representation of integers has been used for the efficient implementation of pairing-based cryptosystems. In this paper, we propose a non-adjacent form of the radix-r representation (rNAF) and efficient algorithms for generating it. The number of non-trivial digits is (r-2)(r+1)/2 and the average density of non-zero digits is asymptotically (r-1)/(2r-1). For r=3, the non-trivial digits are {2, 4} and the non-zero density is 0.4. We then investigate the width-w version of rNAF for the general radix-r representation, which is a natural extension of the width-w NAF. Finally, we compare the proposed algorithms with the generalized NAF (gNAF) discussed by Joye and Yen. The proposed scheme requires a larger table, but its non-zero density is smaller even for a large radix. We explain that gNAF is a simple degeneration of rNAF, so rNAF can be considered a canonical form for the radix-r representation; therefore, rNAF is a good alternative to gNAF.
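
    A minimal sketch (my own) of the classical radix-2 non-adjacent form, the r=2 special case that rNAF generalizes to an arbitrary radix r: each non-zero digit forces the next digit to zero, giving an average non-zero density of 1/3, consistent with the (r-1)/(2r-1) expression quoted above.

      def naf(n):
          """Radix-2 NAF digits of n, least significant digit first."""
          digits = []
          while n > 0:
              if n & 1:                  # odd: choose d in {+1, -1} so that n - d is divisible by 4
                  d = 2 - (n % 4)
              else:
                  d = 0
              digits.append(d)
              n = (n - d) >> 1
          return digits

      d = naf(239)
      print(d)                                          # [-1, 0, 0, 0, -1, 0, 0, 0, 1]
      print(sum(di * 2**i for i, di in enumerate(d)))   # 239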

  • A New Type of Fast Endomorphisms on Jacobians of Hyperelliptic Curves and Their Cryptographic Application

    Katsuyuki TAKASHIMA  

     
    PAPER-Elliptic Curve Cryptography

      Page(s):
    124-133

    The Gallant-Lambert-Vanstone method [14] (GLV method for short) is a scalar multiplication method for elliptic curve cryptography (ECC). WAP WTLS [49], SEC 2 [44], ANSI X9.62 [1] and X9.63 [2] describe several domain parameters suited to the GLV method; curves with those parameters have efficiently computable endomorphisms. Recently, the GLV method for Jacobians of hyperelliptic curves (HEC) has also been studied. In this paper, we discuss applications of the GLV method to curves with real multiplication (RM). As far as we know, this is the first use of RM for efficient scalar multiplication. We describe a general algorithm for using such RM, and we show that some genus-2 curves with RM are efficient enough to be used with the GLV method, as in the previous CM case. Moreover, we show that such RM curves can be obtained abundantly, unlike the previously proposed CM curves of genus 2.

  • Efficient Algorithms for Tate Pairing

    Tetsutaro KOBAYASHI  Kazumaro AOKI  Hideki IMAI  

     
    PAPER-Elliptic Curve Cryptography

      Page(s):
    134-143

    This paper presents new algorithms for the Tate pairing on a prime field. Recently, many pairing-based cryptographic schemes have been proposed. However, computing pairings incurs a high computational cost and represents the bottleneck to using pairings in actual protocols. This paper shows that the proposed algorithms reduce the cost of multiplication and inversion in an extension field and reduce the number of extension-field computations. This paper also discusses the optimal algorithm to be used for each pairing parameter and shows that the total computational cost is reduced by 50% when k = 6 and by 57% when k = 8.

  • Candidate One-Way Functions on Non-Supersingular Elliptic Curves

    Taiichi SAITO  Fumitaka HOSHINO  Shigenori UCHIYAMA  Tetsutaro KOBAYASHI  

     
    PAPER-Elliptic Curve Cryptography

      Page(s):
    144-150

    This paper proposes new candidate one-way functions constructed with a certain type of endomorphisms on non-supersingular elliptic curves. We can show that the one-wayness of our proposed functions is equivalent to some special cases of the co-Diffie-Hellman assumption. Also a digital signature scheme is explicitly described using our proposed functions.

  • Efficient Hyperelliptic Curve Cryptosystems Using Theta Divisors

    Masanobu KATAGI  Toru AKISHITA  Izuru KITAMURA  Tsuyoshi TAKAGI  

     
    PAPER-Elliptic Curve Cryptography

      Page(s):
    151-160

    It has recently been reported that the performance of hyperelliptic curve cryptosystems (HECC) is competitive with that of elliptic curve cryptosystems (ECC). Concerning the security of HECC, theta divisors play an important role: scalar multiplication using a random base point is vulnerable to an exceptional procedure attack, a kind of side-channel attack, that uses theta divisors. In cryptographic protocols whose scalar multiplication uses a fixed base point, however, the exceptional procedure attack is not applicable. First, we present a novel efficient scalar multiplication using theta divisors, which is a positive application of theta divisors in HECC. Second, we develop a window-based method using theta divisors that is secure against side-channel attacks. It is not obvious how to construct a base point D such that all pre-computed points are theta divisors; we present an explicit algorithm for generating such divisors.

  • Security Analysis of the SPA-Resistant Fractional Width Method

    Katsuyuki OKEYA  Tsuyoshi TAKAGI  Camille VUILLAUME  

     
    PAPER-Elliptic Curve Cryptography

      Page(s):
    161-168

    Elliptic curves offer interesting possibilities for alternative cryptosystems, especially in constrained environments like smartcards. However, cryptographic routines running on such lightweight devices can be attacked with the help of "side channel information" such as power consumption. Elliptic curve cryptosystems are no exception: if no precaution is taken, power traces can help attackers reveal secret information stored in tamper-resistant devices. The Okeya-Takagi scheme (OT scheme) is an efficient countermeasure against such attacks on elliptic curve cryptosystems, with the unique feature of allowing any size for the pre-computed table: depending on how much memory is available, users can flexibly change the table size to fit their needs. Since the nature of the OT scheme differs from that of other side-channel attack countermeasures, its security must be investigated thoroughly. In this paper, we present a comprehensive security analysis of the OT scheme and show that, based on information leaked by power consumption traces, attackers can slightly enhance standard attacks. We then explain how to prevent such information leakage with simple and efficient modifications.

  • Best Security Index for Digital Fingerprinting

    Kozo BANNO  Shingo ORIHARA  Takaaki MIZUKI  Takao NISHIZEKI  

     
    PAPER-Information Hiding

      Page(s):
    169-177

    Digital watermarking used for fingerprinting may be subjected to a collusion attack: two or more users collude, compare their data, find a part of the embedded watermarks, and make an unauthorized copy by masking their identities. In this paper, assuming that at most c users collude, we give a characterization of the fingerprinting codes that have the best security index in the sense of the "(c,p/q)-secureness" proposed by Orihara et al. The characterization is expressed in terms of intersecting families of sets. Using a block design, we also show that a distributor of the data can, asymptotically, only find a set of c users including at least one culprit, no matter how good a fingerprinting code is used.

  • Multi-Matcher On-Line Signature Verification System in DWT Domain

    Isao NAKANISHI  Hiroyuki SAKAMOTO  Naoto NISHIGUCHI  Yoshio ITOH  Yutaka FUKUI  

     
    PAPER-Information Hiding

      Page(s):
    178-185

    This paper presents a multi-matcher on-line signature verification system that fuses the verification scores of the pen-position parameter and the pen-movement-angle parameter at the total decision. Features of pen position and pen-movement angle are extracted by sub-band decomposition using the Discrete Wavelet Transform (DWT). For the pen position, high-frequency sub-band signals are used as individual features to enhance the difference between a genuine signature and its forgery. For the pen-movement angle, on the other hand, low-frequency sub-band signals are used as features to suppress intra-class variation. Verification is achieved by adaptive signal processing using the extracted features. The verification scores of the pen position and the pen-movement angle are integrated by a weighted sum rule to make the total decision. Experimental results show that the fusion of pen position and pen-movement angle improves verification performance.
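
    A minimal sketch (not the authors' system) of the weighted-sum fusion step: two normalized matcher scores are combined and compared against a threshold. The weight and threshold values are hypothetical; in practice they would be tuned on training data.

      def fuse_and_decide(score_pos, score_angle, w=0.6, threshold=0.5):
          """Weighted-sum fusion of a pen-position score and a pen-movement-angle score.

          Both scores are assumed normalized to [0, 1], larger meaning more genuine.
          """
          total = w * score_pos + (1.0 - w) * score_angle
          return total, total >= threshold

      total, accept = fuse_and_decide(score_pos=0.72, score_angle=0.41)
      print(f"fused score = {total:.2f}, accept = {accept}")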

  • Correlation-Based Video Watermarking Method Using Inter-Frame Similarity

    Motoo YAMAMOTO  Akira SHIOZAKI  Motoi IWATA  Akio OGIHARA  

     
    PAPER-Information Hiding

      Page(s):
    186-193

    This paper presents a correlation-based watermarking method for video that uses the similarity of adjacent frames. In general, adjacent frames of a video sequence are very similar. In the proposed scheme, we use an adjoining frame in the detection process instead of the original image used in the watermarking scheme of Cox et al., so the proposed method does not need the original video sequence for detection. When a watermarked video sequence is attacked by overwriting copy or frame dropping, a pair of frames that are not adjacent in the original video sequence is used in the detection process. However, since the watermark is embedded in a part of each frame and the embedding positions differ from frame to frame, we can detect the watermark even in an overwriting-copied or frame-dropped video sequence. Experimental results show that the proposed method is robust against overwriting copy and frame dropping. Moreover, the experiments show that the method is robust to low-bitrate MPEG compression and the StirMark attack.
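
    A minimal sketch (spatial-domain, not the paper's exact scheme) of Cox-style spread-spectrum embedding and correlation detection that uses the previous frame in place of the unavailable original: because adjacent frames are similar, the frame difference is dominated by the watermark pattern and correlates strongly with it.

      import numpy as np

      rng = np.random.default_rng(1)
      h, w, alpha = 120, 160, 3.0

      frame1 = rng.integers(0, 256, (h, w)).astype(float)     # previous frame (stand-in content)
      frame2 = frame1 + rng.normal(0, 2.0, (h, w))            # adjacent frame: similar content
      wm = rng.choice([-1.0, 1.0], size=(h, w))               # pseudo-random watermark pattern
      frame2_marked = frame2 + alpha * wm                     # embed into the second frame

      diff = frame2_marked - frame1                           # detection without the original
      corr = np.sum(diff * wm) / (np.linalg.norm(diff) * np.linalg.norm(wm))
      diff0 = frame2 - frame1                                 # unwatermarked reference case
      corr0 = np.sum(diff0 * wm) / (np.linalg.norm(diff0) * np.linalg.norm(wm))
      print(f"with watermark: {corr:.3f}, without: {corr0:.3f}")   # high vs. near zero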

  • A Selective Video Encryption Scheme for MPEG Compression Standard

    Gang LIU  Takeshi IKENAGA  Satoshi GOTO  Takaaki BABA  

     
    PAPER-Application

      Page(s):
    194-202

    With the increase of commercial multimedia applications using digital video, the security of video data becomes more and more important. Although several techniques have been proposed to protect video data, they provide limited security or introduce significant overhead. This paper proposes a video security scheme for the MPEG video compression standard that includes two methods: DCEA (DC Coefficient Encryption Algorithm) and "event shuffle." DCEA aims to encrypt the group of codewords of the DC coefficients. The feature of this method is the use of data permutation to scatter the ciphertexts of the additional codes in the DC codewords, where the additional codes are first encrypted by a block cipher. In combination, these operations provide sufficient security for the important DC components of MPEG video data. "Event shuffle" aims to encrypt the AC coefficients. The prominent feature of this method is a shuffling of the AC events generated after the DCT transform and quantization stages. Experimental results show that these methods introduce no bit overhead to the MPEG bit stream while incurring low processing overhead in the MPEG codec.

  • A Cramer-Shoup Variant Related to the Quadratic Residuosity Problem

    Harunaga HIWATARI  Keisuke TANAKA  

     
    LETTER-Public Key Cryptography

      Page(s):
    203-205

    At Eurocrypt '02, Cramer and Shoup [1] proposed a general paradigm for constructing practical public-key encryption schemes secure against the adaptive chosen ciphertext attack, as well as several concrete examples. One of these examples is the scheme based on the quadratic residuosity (QR) problem; however, this scheme is less efficient than the other examples. In this paper, we construct a new variant of the Cramer-Shoup encryption scheme related to the QR problem. Our variant is more efficient than the scheme based on the QR problem.

  • Security Analysis of Signcryption Scheme from q-Diffie-Hellman Problems

    Chik-How TAN  

     
    LETTER-Public Key Cryptography

      Page(s):
    206-208

    In this paper, we analyse Libert and Quisquater's q-DH signcryption scheme proposed at SCN 2004. Although their paper proves that the scheme is secure against adaptive chosen ciphertext attacks in the random oracle model, we disprove this claim and show that the scheme is not even secure against non-adaptive chosen ciphertext attacks, a weaker notion than security against adaptive chosen ciphertext attacks. We further show that the semantically secure symmetric encryption scheme defined in their paper is not sufficient to guarantee that their signcryption scheme is secure against adaptive chosen ciphertext attacks.

  • Attack on the Sun-Chen-Hwang's Three-Party Key Agreement Protocols Using Passwords

    Junghyun NAM  Seungjoo KIM  Dongho WON  

     
    LETTER-Protocol

      Page(s):
    209-212

    We show that Sun-Chen-Hwang's three-party key agreement protocols using passwords are insecure against an active adversary. Further, we recommend a small change to the protocols that fixes the security problem.

  • Weakness in Jung et al.'s ID-Based Conference Key Distribution Scheme

    Junghyun NAM  Seungjoo KIM  Dongho WON  

     
    LETTER-Protocol

      Page(s):
    213-218

    In 2000, Xu and Tilborg proposed an ID-based conference key distribution scheme which builds on earlier work of Harn and Yang in the 2-party setting. Recently, Jung et al. have discovered security flaws in the Xu-Tilborg scheme and proposed an improvement of this scheme to fix the security flaws. However, Jung et al.'s improvement introduces another security weakness. We demonstrate this by showing that the improved scheme is vulnerable to a parallel session attack mounted by two colluding adversaries. Further, we recommend changes to the scheme that address this vulnerability.

  • Redundancy in Instruction Sequences of Computer Programs

    Kazuhiro HATTANDA  Shuichi ICHIKAWA  

     
    LETTER-Information Hiding

      Page(s):
    219-221

    There is redundancy in instruction sequences that can be utilized for information hiding or digital watermarking. This study quantitatively examines the information capacity in the order of variables, basic blocks, and instructions within each basic block. The derived information density was 0.3% for reordering of basic blocks, 0.3% for reordering of instructions within basic blocks, and 0.02% for reordering of global variables. The performance degradation caused by this method was less than 6.1%, and the increase in object file size was less than 5.1%.
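
    A minimal sketch of the capacity argument behind such reordering-based hiding: n reorderable items admit n! distinguishable orders, so a chosen order can carry about log2(n!) bits; the densities reported above are essentially this capacity divided by the program size.

      import math

      def reorder_capacity_bits(n: int) -> int:
          """Bits embeddable by choosing one of the n! orderings of n items."""
          return int(math.log2(math.factorial(n)))

      for n in (8, 32, 128):                   # e.g. number of reorderable basic blocks
          print(n, reorder_capacity_bits(n))   # 8 items already carry 15 bits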

  • Evaluation of Mutational Capability and Real-Time Applicability of Obfuscation Techniques

    Shinsaku KIYOMOTO  Toshiaki TANAKA  

     
    LETTER-Information Hiding

      Page(s):
    222-226

    This paper reports an evaluation of current obfuscation techniques for Java byte code, such as Collberg's techniques, in terms of mutational capability, real-time applicability, and program size increase. We suggest effective obfuscation techniques for random and real-time obfuscation (RR obfuscation). The evaluation shows that a combination of some obfuscation techniques is useful for RR obfuscation, while some other techniques make little or no difference beyond a certain threshold.

  • Obtaining Traceability Codes from Chinese Remainder Theorem Codes

    Marcel FERNANDEZ  Miguel SORIANO  Josep COTRINA  

     
    LETTER-Information Hiding

      Page(s):
    227-230

    Traceability codes are used in schemes that prevent illegal redistribution of digital content. In this Letter, we use Chinese Remainder Theorem codes to construct traceability codes. Both the code parameters and the traitor identification process take into account the non-uniformity of the alphabet of Chinese Remainder Theorem codes. Moreover, it is shown that the identification process can be done in polynomial time using list decoding techniques.

  • Simple Power Analysis on Fast Modular Reduction with Generalized Mersenne Prime for Elliptic Curve Cryptosystems

    Yasuyuki SAKAI  Kouichi SAKURAI  

     
    LETTER-Side Channel Analysis

      Page(s):
    231-237

    We discuss side-channel leakage from modular reduction for the NIST recommended domain parameters. FIPS 186-2 has five recommended prime fields. These primes have a special form, referred to as generalized Mersenne primes, which facilitates especially efficient implementation. A typical implementation of efficient modular reduction with such primes includes a conditional reduction, and a conditional reduction in modular reduction can constitute an information channel on the secret exponent. Several researchers have produced unified code for elliptic point addition and doubling in order to avoid simple power analysis (SPA). However, Walter showed that SPA is still possible if Montgomery multiplication with conditional reduction is implemented within the unified code. In this paper, we show an SPA on modular reduction with the NIST recommended primes, combined with the unified code for elliptic point operations. As Walter stated, our results also indicate that even if unified code is implemented for elliptic point operations, the underlying field operations should be implemented in constant time; the unified approach in itself cannot be a countermeasure against side channel attacks.
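
    A minimal sketch (mine, not the paper's attack or countermeasure) of why a conditional final reduction leaks: the branchy version performs the subtraction only when needed, which is visible in a power trace, whereas the masked version always executes the same sequence of operations. Python integers are not genuinely constant-time; only the branch-free structure is the point. P-192 is used as an example generalized Mersenne prime from FIPS 186-2.

      P192 = 2**192 - 2**64 - 1
      MASK256 = (1 << 256) - 1

      def reduce_branchy(x):
          """Conditional reduction with a data-dependent branch (SPA-observable)."""
          if x >= P192:
              x -= P192
          return x

      def reduce_masked(x):
          """Same result via an always-executed subtraction plus a masked select."""
          t = x - P192
          borrow = (t >> 255) & MASK256      # all-ones iff the subtraction went negative
          return (x & borrow) | (t & ~borrow & MASK256)

      for v in (0, P192 - 1, P192, P192 + 12345):
          assert reduce_branchy(v) == reduce_masked(v)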

  • uT-RBAC: Ubiquitous Role-Based Access Control Model

    Song-hwa CHAE  Wonil KIM  Dong-Kyoo KIM  

     
    LETTER-Access Control

      Page(s):
    238-239

    In a ubiquitous environment where users access resources anytime and anywhere, an access control model should consider the user's location information. The proposed uT-RBAC includes location information to enforce the user's least privilege. It also supports time-related information, which enables the access control model to accommodate various ubiquitous environments. The proposed uT-RBAC can be dynamically applied to various ubiquitous computing environments.

  • Regular Section
  • Common Acoustical Pole Estimation from Multi-Channel Musical Audio Signals

    Takuya YOSHIOKA  Takafumi HIKICHI  Masato MIYOSHI  Hiroshi G. OKUNO  

     
    PAPER-Engineering Acoustics

      Page(s):
    240-247

    This paper describes a method for estimating the amplitude characteristics of poles common to multiple room transfer functions from musical audio signals received by multiple microphones. Knowledge of these pole characteristics would make it easier to manipulate audio equalizers, since the poles correspond to the room resonance. It has been proven that an estimate of the poles can be calculated precisely when the source signal is white. However, if the source signal is colored, as in the case of a musical audio signal, the estimate is degraded by the frequency characteristics originally contained in the source signal. In this paper, we consider the amplitude spectrum of a musical audio signal to consist of its envelope and fine structure. We assume that musical pieces can be classified into several categories according to their average amplitude spectral envelopes. Based on this assumption, the amplitude spectral envelope of the musical audio signal can be obtained from prior knowledge of the average amplitude spectral envelope of the category into which the target piece is classified. The fine structure, on the other hand, is identified based on its time variance. By removing both the spectral envelope and the fine structure from the amplitude spectrum estimated with the conventional method, the amplitude characteristics of the acoustical poles can be extracted. Simulation results for 20 popular songs revealed that our method was capable of estimating the amplitude characteristics of the acoustical poles with a spectral distortion of 3.11 dB. In particular, most of the spectral peaks, corresponding to the room resonance modes, were successfully detected.

  • Coefficients--Delay Simultaneous Adaptation Scheme for Linear Equalization of Nonminimum Phase Channels

    Yusuke TSUDA  Jonah GAMBA  Tetsuya SHIMAMURA  

     
    PAPER-Digital Signal Processing

      Page(s):
    248-259

    An efficient adaptation technique for the delay is introduced to accomplish more accurate adaptive linear equalization of nonminimum phase channels. We note that the filter structure and adaptation procedure of the adaptive Butler-Cantoni (ABC) equalizer are well suited to handling a variable delay at each iteration, compared with a classical adaptive linear transversal equalizer (LTE). We derive a cost function by comparing the system mismatch of an optimum equalizer coefficient vector with that of equalizer coefficient vectors under several delay settings. The cost function is the square of the difference between the absolute values of the first and last elements of the equalizer coefficient vector. A delay adaptation method based on this cost function is developed and incorporated into the ABC equalizer. The delay is adapted by checking the first and last elements of the equalizer coefficient vector, and this results in an LTE that provides a lower mean square error level than other LTEs of the same order. We confirm the performance of the ABC equalizer with the delay adaptation method through computer simulations.

  • A MPBSG Technique Based Parallel Dual-Type Method Used for Solving Distributed Optimal Power Flow Problems

    Huay CHANG  Shieh-Shing LIN  

     
    PAPER-Systems and Control

      Page(s):
    260-269

    In this paper, we propose a method to solve the distributed optimal power flow (OPF) problem and discuss the associated implementation. The method combines a projected Jacobi (PJ) method with a modified parallel block scaled gradient (MPBSG) method possessing decomposition effects. With the decomposition, our method can be processed in parallel and is computationally efficient. We have tested our method on distributed OPF problems for numerous power systems. As seen from the simulation results, our method achieves a dramatic speed-up ratio compared with commercial IMSL subroutines.

  • Approximation and Analysis of Non-linear Equations in a Moment Vector Space

    Hideki SATOH  

     
    PAPER-Nonlinear Problems

      Page(s):
    270-279

    Moment vector equations (MVEs) are presented for use in approximating and analyzing multi-dimensional non-linear discrete- and continuous-time equations. A non-linear equation is expanded into simultaneous equations of generalized moments and then reduced to an MVE of a coefficient matrix and a moment vector. The MVE can be used to analyze the statistical properties, such as the mean, variance, covariance, and power spectrum, of the non-linear equation. Moreover, we can approximately express a combination of non-linear equations by using a combination of MVEs of the equations. Evaluation of the statistical properties of Lorenz equations and of a combination of logistic equations based on the MVE approach showed that MVEs can be used to approximate non-linear equations in statistical measurements.

  • A Cost-Effective Handshake Protocol and Its Implementation for Bundled-Data Asynchronous Circuits

    Masakazu SHIMIZU  Koki ABE  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    280-287

    We propose and implement a four-phase handshake protocol for bundled-data asynchronous circuits with consideration given to power consumption and area. A key aspect is that our protocol uses three phases for generating the matched delay that signals the completion of the data-path stage operation, whereas conventional methods use only one phase. A comparison with other protocols in a 0.18 µm process showed that our protocol achieves lower power consumption than any other protocol at cycle times of 1.2 ns or more. The area of the delay generator required for a given data-path delay was less than half that of the other protocols, and the overhead of the timing generator was the same as or less than that of the other protocols.

  • Optimal Workload for a Multi-Tasking k-out-of-n:G Load Sharing System

    Ji Hwan CHA  Hisashi YAMAMOTO  Won Young YUN  

     
    PAPER-Reliability, Maintainability and Safety Analysis

      Page(s):
    288-296

    In this paper, the problem of determining the optimal workload for a load sharing system is considered. The system is composed of n components in total and functions until (n-k+1) components have failed. The works to be performed by the system arrive according to a homogeneous Poisson process, and it is assumed that the system can perform a sufficiently large number of works simultaneously. The system is subject to a workload, expressed in terms of the arrival rate of the works, and the workload is shared equally by the surviving components in the system. We assume that an increased workload induces a higher failure rate in each remaining component. The time consumed for the completion of each work is assumed to be either constant or a random quantity following an exponential distribution. Under this model, as a measure of system performance, we derive the long-run average number of works performed per unit time and determine the optimal workload that maximizes this performance measure.

  • Practical Implementations of a Non-disclosure Fair Contract Signing Protocol

    Chih-Hung WANG  Chih-Heng YIN  

     
    PAPER-Information Security

      Page(s):
    297-309

    Contract signing is a practical application of the fair exchange of digital signatures. This application used to be realized by directly adopting the results of the fair exchange of signatures, which do not completely meet the requirements of signing a secret contract. The assistance of a trusted third party (TTP) and some cryptographic technology are required to allow two parties to exchange their signatures over the network in a fair manner, because the two parties may be dishonest or may mistrust each other. This paper presents a subtle method of preventing the off-line TTP from obtaining the exchanged signature and the corresponding message when a dispute occurs between the two parties and the TTP is required to take part in the exchange procedure. An advanced concept, the non-disclosure property, is proposed in order to prevent a party from misusing evidence left during the exchange process. Two approaches, namely the secret divide method and the convertible signature, are demonstrated. To satisfy the properties of traditional paper-based contract signing, the technique of a multi-signature scheme is used in the proposed protocols.

  • Design and Experimental Evaluation of Improved Least Squares and Weighted Least Squares Quadrature Mirror Filters

    A.P. VINOD  

     
    LETTER-Digital Signal Processing

      Page(s):
    310-315

    The least squares (LS) and weighted least squares (WLS) algorithms are well-known procedures used in the design of quadrature mirror filters (QMFs). It is known that these design techniques suffer from pass-band anomaly under certain conditions. A recent method that overcomes the pass-band anomaly for LS QMFs, using a frequency sampling design for the initial filter, is extended to WLS design in this letter. A comparison between the modified LS and WLS designs based on experimental results is presented. Although WLS designs have been reported to have superior near-equiripple stop-band performance compared with LS designs, we find that this is not always true. Specifically, LS designs, with their inherent computational and robustness advantages, also have better peak stop-band ripple and transition bandwidth at higher cut-off frequencies than WLS designs.

  • QFB Low-Delay Design Satisfying Perfect-Reconstruction

    Her-Chang CHAO  Shih-Jen YANG  

     
    LETTER-Digital Signal Processing

      Page(s):
    316-320

    In this letter, we present a new numerical design method for 2-D FIR quincunx filter banks (QFBs) with low delay, equiripple magnitude response, and perfect reconstruction (PR). The necessary conditions for the system delay of the QFB are derived. The dual affine scaling variant of Karmarkar's algorithm is employed to minimize the peak ripples of the analysis filters, and a linearization scheme is introduced to satisfy the PR constraint of the QFB. Several simulation examples are included to show the efficacy of the proposed design technique.

  • Recursive Computation of Wiener-Khintchine Theorem and Bispectrum

    Khalid Mahmood AAMIR  Mohammad Ali MAUD  Arif ZAMAN  Asim LOAN  

     
    LETTER-Digital Signal Processing

      Page(s):
    321-323

    The power spectral density (PSD) computed by taking the Fourier transform of the autocorrelation function (Wiener-Khintchine theorem) gives a better result for noisy data than the periodogram approach when the signal is Gaussian. However, the computational complexity of the Wiener-Khintchine approach is greater than that of the periodogram approach. For the computation of the short-time Fourier transform (STFT), this problem becomes even more prominent because the PSD must be recomputed after every shift of the analysis window. This paper presents a recursive form of the PSD to reduce this complexity. If the signal is not Gaussian, the PSD approach is insufficient and we estimate the higher-order spectra of the signal; estimating higher-order spectra is even more time consuming. In this paper, a recursive version of the bispectrum computation is presented as well. The computational complexities of the PSD and the bispectrum for a window size of N are O(N) and O(N^2), respectively.
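
    A minimal sketch (my own recursion, not necessarily the authors' exact update) of the idea: when the analysis window slides by one sample, each lag of the autocorrelation is updated with O(1) work by adding the newest product and removing the oldest one, and the PSD then follows from the Wiener-Khintchine theorem.

      import numpy as np

      def autocorr_direct(x, L):
          """r[k] = sum_n x[n] x[n-k] over the window, for k = 0..L-1."""
          N = len(x)
          return np.array([np.dot(x[k:], x[:N - k]) for k in range(L)])

      def autocorr_update(r, window, x_new, L):
          """Slide the window by one sample and update every lag recursively."""
          x_old = window[0]
          new_window = np.append(window[1:], x_new)
          for k in range(L):
              r[k] += x_new * new_window[-1 - k]   # add the newest product
              r[k] -= x_old * window[k]            # drop the oldest product
          return r, new_window

      rng = np.random.default_rng(0)
      x = rng.normal(size=300)
      N, L = 256, 32
      window = x[:N].copy()
      r = autocorr_direct(window, L)
      for n in range(N, N + 20):                   # slide the window 20 times
          r, window = autocorr_update(r, window, x[n], L)
      assert np.allclose(r, autocorr_direct(window, L))

      # Wiener-Khintchine: S(w) = r[0] + 2 * sum_{k>=1} r[k] cos(wk)
      psd = 2 * np.real(np.fft.rfft(r, 2 * L)) - r[0]
      print(psd[:4])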

  • A Novel Image Segmentation Approach Based on Particle Swarm Optimization

    Chih-Chin LAI  

     
    LETTER-Digital Signal Processing

      Page(s):
    324-327

    Image segmentation denotes a process by which an image is partitioned into non-intersecting regions, each of which is homogeneous. Using histogram information to segment an image is a common approach in many applications. In this paper, we view image segmentation as an optimization problem: we find the curve that best fits the given image histogram, and the parameters of the curve are determined using the particle swarm optimization algorithm. Experimental results confirming the proposed approach are also included.
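
    A minimal sketch (my own, with hypothetical parameter ranges and PSO settings) of the histogram-fitting idea: a two-Gaussian curve is fitted to a grey-level histogram by a basic particle swarm optimizer, and a threshold is then taken at the valley of the fitted curve between the two means.

      import numpy as np

      def mix(x, p):                        # two-Gaussian curve model
          a1, m1, s1, a2, m2, s2 = p
          return (a1 * np.exp(-(x - m1) ** 2 / (2 * s1 ** 2))
                  + a2 * np.exp(-(x - m2) ** 2 / (2 * s2 ** 2)))

      def pso_fit(hist, particles=30, iters=300, seed=0):
          rng = np.random.default_rng(seed)
          x = np.arange(256.0)
          lo = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])
          hi = np.array([hist.max(), 255.0, 80.0, hist.max(), 255.0, 80.0])
          pos = rng.uniform(lo, hi, (particles, 6))
          vel = np.zeros_like(pos)
          cost = lambda p: np.sum((mix(x, p) - hist) ** 2)
          pbest, pbest_c = pos.copy(), np.array([cost(p) for p in pos])
          gbest = pbest[pbest_c.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, particles, 6))
              vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, lo, hi)
              c = np.array([cost(p) for p in pos])
              better = c < pbest_c
              pbest[better], pbest_c[better] = pos[better], c[better]
              gbest = pbest[pbest_c.argmin()].copy()
          return gbest

      # Synthetic bimodal histogram standing in for a real image histogram.
      rng = np.random.default_rng(1)
      samples = np.concatenate([rng.normal(70, 12, 40000), rng.normal(180, 18, 60000)])
      hist, _ = np.histogram(np.clip(samples, 0, 255), bins=256, range=(0, 256))

      p = pso_fit(hist.astype(float))
      m_lo, m_hi = sorted([p[1], p[4]])
      valley = np.arange(int(m_lo), int(m_hi) + 1, dtype=float)
      threshold = int(valley[np.argmin(mix(valley, p))])
      print("estimated threshold:", threshold)   # should fall between the two modes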

  • A Design Method of Parallel Fast RLS Second-Order Adaptive Volterra Filter

    Xueqin ZHAO  Jianming LU  Takashi YAHAGI  

     
    LETTER-Nonlinear Problems

      Page(s):
    328-333

    The adaptive Volterra filter (AVF) is attractive in adaptive filtering applications because its expansion is a linear combination of the input and output signals. However, the formidable computational cost of the AVF is prohibitive for practical applications. In this letter, we present a parallel fast recursive least squares (RLS) second-order adaptive Volterra filter (PAVF) to reduce the computational load. Our discussion is based on the fast RLS AVF approach [3], with which the computational complexity has been reduced to O(N^3) multiplications per time instant, where O(·) denotes "order of" and N is the filter length. The proposed PAVF consists of several subfilters partitioned from the conventional AVF; with parallel implementation, the computational work can be reduced effectively. Several simulation results are presented to validate the proposed method.

  • Refined Computations for Points of the Form 2^kP Based on Montgomery Trick

    Daisuke ADACHI  Tomio HIRATA  

     
    LETTER-Information Security

      Page(s):
    334-339

    This paper focuses on algorithms for efficient scalar multiplication. It proposes two algorithms for computing points of the form 2^kP in affine coordinates: one works for k=2, and the other works for an arbitrary natural number k. The efficiency of these algorithms is based on a trade-off between a field inversion and several field multiplications, and the Montgomery trick is used to implement this trade-off. Since a field inversion is usually more expensive than 10 field multiplications, the proposed algorithms are efficient in comparison with existing ones.
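
    A minimal sketch of the Montgomery trick the abstract relies on: n field inversions are traded for a single inversion plus about 3(n-1) multiplications by inverting the running product and unwinding it. It is shown here for a generic prime field; the curve-specific 2^kP formulas of the paper are not reproduced.

      def simultaneous_inverse(values, p):
          """Invert every element of `values` modulo the prime p with one inversion."""
          n = len(values)
          prefix = [1] * (n + 1)
          for i, v in enumerate(values):           # prefix[i+1] = v[0] * ... * v[i]
              prefix[i + 1] = prefix[i] * v % p
          inv = pow(prefix[n], -1, p)              # the single field inversion
          out = [0] * n
          for i in range(n - 1, -1, -1):           # unwind the running product
              out[i] = inv * prefix[i] % p         # = values[i]^(-1)
              inv = inv * values[i] % p            # inverse of the product up to i-1
          return out

      p = 2**255 - 19                              # an example prime field
      vals = [3, 7, 1234567, 2**200 + 1]
      invs = simultaneous_inverse(vals, p)
      assert all(v * w % p == 1 for v, w in zip(vals, invs))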

  • Low Encoding Complexity Video Compression Based on Low-Density Parity Check Codes

    Haruhiko KANEKO  

     
    LETTER-Information Theory

      Page(s):
    340-347

    Conventional video compression methods generally require a large amount of computation in the encoding process because they perform motion estimation. In order to reduce the encoding complexity of video compression, this paper proposes a new video compression method based on low-density parity-check codes. The proposed method is suitable for resource-constrained devices such as mobile phones and satellite cameras.

  • QPSK Differential Space Time Coding on Different Unitary Matrices Sets and Initializations

    Jia HOU  Moon Ho LEE  

     
    LETTER-Coding Theory

      Page(s):
    348-353

    This letter investigates a distinct set of complex unitary matrices for differential space-time coding using QPSK modulation. The numerical results show that proper selection of the initial transmission matrix and the set of unitary matrices can efficiently improve the bit error rate (BER) performance, especially for the antenna-correlated fading channel. The computer simulations are evaluated over slow and fast Rayleigh fading channels.

  • A Hill-Shift Learning Algorithm of Hopfield Network for Bipartite Subgraph Problem

    Rong-Long WANG  Kozo OKAZAKI  

     
    LETTER-Neural Networks and Bioengineering

      Page(s):
    354-358

    In this paper, we present a hill-shift learning method of the Hopfield neural network for the bipartite subgraph problem. The method uses the Hopfield neural network to obtain a near-maximum bipartite subgraph, and shifts the local minimum of the energy function by adjusting the balance between two terms of the energy function, helping the network escape from the state of the near-maximum bipartite subgraph to the state of the maximum bipartite subgraph or a better one. A large number of instances are simulated to verify the proposed method, with the simulation results showing that the solution quality is superior to that of the best existing parallel algorithm.