IEICE TRANSACTIONS on Fundamentals

  • Impact Factor

    0.40

  • Eigenfactor

    0.003

  • Article Influence

    0.1

  • CiteScore

    1.1

Volume E84-A No.1  (Publication Date:2001/01/01)

    Special Section on the 10th Anniversary of the IEICE Transactions of Fundamentals: "Last Decade and 21st Century"
  • FOREWORD

    Shoji SHINODA  

     
    FOREWORD

      Page(s):
    1-1
  • A History of the English IEICE Transactions

    Shoji SHINODA  

     
    ARTICLE

      Page(s):
    2-6

    The history of the English IEICE Transactions is recounted from the beginning through the eyes of a person who has been involved in promoting them, describing why and how they have actually been reformed. The purpose and significance of the English IEICE Transactions, especially of the IEICE Trans. Fundamentals, are clarified.

  • Research Topics and Results on Nonlinear Theory and Its Applications in Japan

    Kiyotaka YAMAMURA  Kazuo HORIUCHI  

     
    INVITED PAPER

      Page(s):
    7-13

    This paper surveys the research topics and results on nonlinear theory and its applications achieved in Japan or by Japanese researchers during the last decade. Particular emphasis is placed on chaos, neural networks, nonlinear circuit analysis, nonlinear system theory, and numerical methods for solving nonlinear systems.

  • New Vistas to the Signal Processing of Nonstationary Time Series via an Operator Algebraic Way

    Tosiro KOGA  

     
    INVITED PAPER

      Page(s):
    14-30

    This paper is, in part, a review, and presents recent theoretical results on linear filtering and prediction problems for nonstationary Gaussian processes. First, the basic concepts of signal and noise are mathematically characterized, and information sources are defined by linear stochastic differential equations. Then, it is shown that the solution to a conventional problem of filtering or prediction of a nonstationary time series is, in principle, reducible to a problem whose solution is given by Kalman-Bucy theory, provided one can solve the problem of finding the canonical representation of a Gaussian process that has the same covariance functions as the time series under consideration. However, that problem is left open. Further, the problem of time-frequency analysis is discussed, and the physical realizability of an evolutionary, i.e., online, spectral analyzer is shown. Methods for dealing with differential operators are presented and their basic properties are clarified. Finally, some related open problems are proposed.

  • Differential and Algebraic Geometry of Multilayer Perceptrons

    Shun-ichi AMARI  Tomoko OZEKI  

     
    INVITED PAPER

      Page(s):
    31-38

    Information geometry is applied to the manifold of neural networks called multilayer perceptrons. It is important to study the total family of networks as a geometrical manifold, because learning is represented by a trajectory in such a space. The manifold of perceptrons has a rich differential-geometrical structure represented by a Riemannian metric and singularities. An efficient learning method is proposed that exploits this structure. The parameter space of perceptrons includes many algebraic singularities, which affect the trajectories of learning. Such singularities are studied using simple models. This poses an interesting problem of statistical inference and learning in hierarchical models that include singularities.

  • Principle of Superposition for Realizing Dexterous Pinching Motions of a Pair of Robot Fingers with Soft-Tips

    Suguru ARIMOTO  Pham Thuc Anh NGUYEN  

     
    INVITED PAPER

      Page(s):
    39-47

    This paper is concerned with the analysis of nonlinear dynamics under geometric constraints that express pinching motions of a pair of multi-degree-of-freedom fingers with soft tips. The dynamics of such a pair of soft fingers can be expressed by a set of complicated nonlinear differential equations with algebraic constraints, even if the motion is constrained to a plane. However, it is shown from passivity analysis that dynamically stable grasping (pinching) can be realized by means of a feedforward input of the desired internal force, with coefficients composed of elements of Jacobian matrices, plus feedback of the difference between the moments of rotation exerted at the two sides of the object. It is shown in the case of a pair of 2-d.o.f. and 3-d.o.f. fingers (corresponding to a thumb and index finger) that a principle of linear superposition is applicable to the design of additional feedback signals for simultaneously controlling the posture (rotational angle) and the position of the mass center of the object, even though the dynamics are nonlinear. A sufficient condition for the applicability of the principle of superposition is discussed and given as a condition for the unique stationary resolution of the overall motion into elementary motions (stable grasping, rotation control, and x- and y-coordinate control). The principle implies that a skilled motion can be resolved into elementary motions which humans can learn separately and independently.

  • Parallel Meta-Heuristics and Autonomous Decentralized Combinatorial Optimization

    Morikazu NAKAMURA  Kenji ONAGA  

     
    INVITED PAPER

      Page(s):
    48-54

    This paper treats meta-heuristics for combinatorial optimization problems. The parallelization of meta-heuristics is then discussed, and we show that parallel processing has the potential not only to speed up the search but also to improve solution quality. Finally, we extend the discussion of combinatorial optimization to autonomous decentralized systems, i.e., autonomous decentralized optimization. This notion is becoming very important with the advancement of network-connected system architectures.

  • Wireless Past and Future--Evolving Mobile Communications Systems--

    Fumiyuki ADACHI  

     
    INVITED PAPER

      Page(s):
    55-60

    Nowadays, when people colloquially use the word "wireless," they almost always mean a portable telephone. Over the last 10 years, there has been tremendous growth in the mobile communications markets, not only in Japan but also worldwide. During these 10 years, voice communication has been the dominant service. However, modern mobile communications systems are shifting their focus from voice communication alone to electronic mail and Internet access. From now on, we will evolve into a wireless multimedia society, in which a combination of mobile communications and the Internet will play an important role. Wireless technology is the core of mobile communications systems. This article, which focuses on wireless technology, looks at how mobile communications systems have evolved over the last 10 years and looks ahead to the advanced wireless technologies that will be necessary to realize a true wireless multimedia society in the coming decade.

  • Development of Cryptology in the Nineties

    Hideki IMAI  Junji SHIKATA  

     
    INVITED PAPER

      Page(s):
    61-67

    Modern cryptology was born in the late seventies and developed in the eighties. The decade since 1991 has been a period of continued development and new expansion of cryptology. In this paper we survey the development of cryptologic research in this decade, with emphasis on the results in Japan. We also discuss some important future work and propose the foundation of a public institution for the evaluation of information security techniques.

  • Analog Circuit Designs in the Last Decade and Their Trends toward the 21st Century

    Shigetaka TAKAGI  

     
    INVITED PAPER

      Page(s):
    68-79

    This paper reviews analog-circuit research in the 1990s, especially from an academic point of view, with the aim of identifying what will become important in the 21st century. To this end a large number of articles are surveyed, and more than 200 are listed in the References.

  • Digital Signal Processing: Progress over the Last Decade and the Challenges Ahead

    Nozomu HAMADA  

     
    INVITED PAPER

      Page(s):
    80-90

    Aspects of the diverse developments of digital signal processing (DSP) over the last decade are summarized. The current progress of some core fields is treated in this paper: filter design, wavelet theory and filter banks, adaptive signal processing, nonlinear filters, multidimensional signal processing, intelligent signal processing, and digital signal processors. The overview of recent research activities demonstrates the interdisciplinary character of DSP. Some challenging research directions are described in the last section.

  • Towards the System LSI Design Technology

    Hiroto YASUURA  

     
    INVITED PAPER

      Page(s):
    91-97

    The system LSI is a new principal product of the semiconductor industry and a key component of Information Technology (IT). The design of a system LSI combines two different characteristics: system design and LSI design. It is a pressing issue to establish a design methodology for system LSIs in which designers have great freedom in their designs, from the system level to the device level, and can control various design parameters to optimize their designs. In this paper, considerations on the markets for system LSIs and the requirements of each application are summarized. Some proposals on new directions for design methodology are also surveyed.

  • A Perspective on Next-Generation Ad Hoc Networks--A Proposal for an Open Community Network--

    Kenichi MASE  Masakazu SENGOKU  Shoji SHINODA  

     
    INVITED PAPER

      Page(s):
    98-106

    The concept of wireless ad hoc networking has unique features in which neither base stations nor wired backbone networks are required and a mobile node can communicate with a partner beyond the transmission range by multihopping. In this paper, innovations and issues in ad hoc network technologies are reviewed. The concept of a general-purpose ad hoc network is identified as a step toward next-generation ad hoc network development. The concept of an open community network is then presented as a vision for general-purpose ad hoc networks. An open community network is a novel information infrastructure for local communities based on wireless multihopping technologies, which may support an advanced information-oriented society in the twenty-first century. As a case study, an experimental system using PHS (Personal Handy Phone System) is described and some research issues for developing an open community network are identified.

  • Special Section on Cryptography and Information Security
  • FOREWORD

    Hiroki SHIZUYA  

     
    FOREWORD

      Page(s):
    107-107
  • Cryptographic Works of Dr. Kenji Koyama: In Memoriam

    Noboru KUNIHIRO  Kazuo OHTA  Tatsuaki OKAMOTO  Routo TERADA  Yukio TSURUOKA  

     
    INVITED PAPER

      Page(s):
    108-113

    Dr. Kenji Koyama, one of the most respected and prominent Japanese researchers in modern cryptography, passed away on March 27, 2000. He left behind many outstanding academic achievements in cryptography as well as in other areas such as emotion transmission theory, learning, and mathematical games. In this manuscript, with our deepest sympathy and greatest appreciation for his contributions to our society, we introduce his major works, mainly in cryptography, although his papers in other areas are included in the bibliography.

  • Fast Computation over Elliptic Curves E(F_q^n) Based on Optimal Addition Sequences

    Yukio TSURUOKA  Kenji KOYAMA  

     
    PAPER

      Page(s):
    114-119

    A fast method for computing a multiple mP for a point P on elliptic curves is proposed. This new method is based on optimal addition sequences and the Frobenius map. The new method can be effectively applied to elliptic curves E(F_q^n), where q is a prime power of medium size (e.g., q ≤ 128). When we compute mP over curves E(F_q^n) with q^n of nearly 160 bits and 11 ≤ q ≤ 128, the new method requires fewer elliptic curve additions than previously proposed methods. In this case, the average number of elliptic curve additions ranges from 40 to 50.
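    As background, the baseline that addition-sequence and Frobenius-map methods compete against is plain double-and-add scalar multiplication, which uses about log2(m) doublings plus one addition per nonzero bit of m. A minimal sketch over a small illustrative curve (the prime, coefficients, and point below are toy values, not taken from the paper):

```python
# Double-and-add scalar multiplication on a toy curve y^2 = x^3 + a*x + b over F_p.
# Illustrative parameters only; the paper's curves E(F_q^n) and its
# addition-sequence/Frobenius optimizations are not reproduced here.
p, a, b = 97, 2, 3
O = None  # point at infinity

def add(P, Q):
    """Affine point addition, handling the identity and inverse cases."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                      # P + (-P) = O
    if P == Q:                        # doubling: slope = (3x^2 + a) / 2y
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                             # addition: slope = (y2 - y1) / (x2 - x1)
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(m, P):
    """Left-to-right double-and-add: one doubling per bit of m."""
    R = O
    for bit in bin(m)[2:]:
        R = add(R, R)
        if bit == '1':
            R = add(R, P)
    return R
```

    An addition-sequence method reduces the number of group operations below what this bit-by-bit scan needs, which is the saving the abstract quantifies.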

  • Efficient Scalar Multiplications on Elliptic Curves with Direct Computations of Several Doublings

    Yasuyuki SAKAI  Kouichi SAKURAI  

     
    PAPER

      Page(s):
    120-129

    We introduce efficient algorithms for scalar multiplication on elliptic curves defined over F_p. The algorithms compute 2^k P directly from P, where P is a random point on an elliptic curve, without computing the intermediate points, which is faster than k repeated doublings. Moreover, we apply the algorithms to scalar multiplication on elliptic curves and analyze their computational complexity. As a result of an implementation with respect to affine (resp. weighted projective) coordinates, we achieved a performance improvement by a factor of 1.45 (45%) (resp. 1.15 (15%)) in scalar multiplication on a 160-bit elliptic curve.

  • A Fast Jacobian Group Arithmetic Scheme for Algebraic Curve Cryptography

    Ryuichi HARASAWA  Joe SUZUKI  

     
    PAPER

      Page(s):
    130-139

    The goal of this paper is to describe a practical and efficient algorithm for computing in the Jacobian of a large class of algebraic curves over a finite field. For elliptic and hyperelliptic curves, there exists an algorithm for performing Jacobian group arithmetic in O(g^2) operations in the base field, where g is the genus of the curve. The main problem addressed in this paper is whether there exists a method to perform the arithmetic on more general curves. Galbraith, Paulus, and Smart proposed an algorithm to perform the arithmetic in O(g^2) operations in the base field for so-called superelliptic curves. We generalize their algorithm to the class of C_ab curves, which includes superelliptic curves as a special case. Furthermore, in the case of C_ab curves, we show that the proposed algorithm is not only more general but also more efficient than the previous algorithm as the parameter a in C_ab curves grows large.

  • On the Complexity of Constructing an Elliptic Curve of a Given Order

    Masato YAMAMICHI  Masahiro MAMBO  Hiroki SHIZUYA  

     
    PAPER

      Page(s):
    140-145

    Can we find, in polynomial time, an elliptic curve of a given order over a finite field? This paper is concerned with this question, which has been open since 1986. Consider the partial multivalued function that outputs such an elliptic curve. We characterize the difficulty of computing this function, and show that the polynomial time hierarchy collapses if sat reduces to this function with respect to polynomial time Turing reducibility, where sat is the partial multivalued function that, on input a Boolean formula, outputs a satisfying assignment. We also give a problem that is equivalent to the open question under the Extended Riemann Hypothesis.

  • Speeding up the Lattice Factoring Method

    Shigenori UCHIYAMA  Naoki KANAYAMA  

     
    PAPER

      Page(s):
    146-150

    Recently, Boneh et al. proposed an interesting algorithm for factoring integers, the so-called LFM (Lattice Factoring Method). It is based on the techniques of Coppersmith and Howgrave-Graham; namely, it cleverly employs the LLL algorithm. The LFM is for integers of the form N = p^r q, and is very effective for large r. That is, it runs in time polynomial in log N when r is on the order of log p. We note that for small r, e.g. N = pq or p^2 q, it is an exponential time algorithm in log N. In this paper, we propose a method for speeding up the LFM from a practical viewpoint. Theoretical considerations and experimental results are also provided, showing that the proposed algorithm offers a shorter running time than the original LFM.

  • A Way of Making Trapdoor One-Way Functions Trapdoor No-Way

    Eikoh CHIDA  Motoji OHMORI  Hiroki SHIZUYA  

     
    PAPER

      Page(s):
    151-156

    A zero-way permutation was first introduced by Niemi-Renvall at Asiacrypt'94. In this paper we define the class of functions called no-way functions, an extended version of zero-way permutations. Intuitively, a function f is no-way if, without the trapdoor, both computing f and computing f^-1 are hard. Li-Chida-Shizuya defined the notion of a no-way function, which is a provable-security version of a zero-way permutation, and gave an example of a no-way function such that computing f and f^-1 is proven to be as hard as breaking the Diffie-Hellman key exchange scheme. We redefine the notion of a trapdoor no-way function more precisely, classify no-way functions by the property of the trapdoor (common, separated, and semi-separated trapdoor no-way), give a method for constructing trapdoor no-way functions from trapdoor one-way functions, and also give an example of trapdoor no-way functions.

  • On Lower Bounds for the Communication Complexity of Private Information Retrieval

    Toshiya ITOH  

     
    PAPER

      Page(s):
    157-164

    Private information retrieval for k ≥ 1 databases (denoted (k,l)-PIR for short) is a protocol in which (1) a user sends an l-tuple query to each of k noncommunicating replicated databases; (2) each database responds to the user with an answer corresponding to the l-tuple query; and (3) the user privately retrieves any single bit of the n bits of data stored in the k databases. In this model, "privacy" means that the user retrieves the bit he is interested in but releases to each database nothing about which bit he wishes to get. In general, the efficiency of (k,l)-PIR is measured by the total number of bits exchanged between the user and the k databases, but little is known about its lower bounds except in restricted cases. In this paper, we classify (k,l)-PIR into a linear type, a multilinear type, and an affine type with respect to the relationship between the queries to each database (made by the user) and the answers to the user (made by each database), and show that (1) the lower bound for the communication complexity of any multilinear type (k,l)-PIR is Ω(n^(1/(l+1))) (Theorem 3.1); (2) the lower bound for the communication complexity of any linear type (k,l)-PIR is Ω(n) (Corollary 3.2); and (3) the lower bound for the communication complexity of any affine type (k,l)-PIR is Ω(n^(1/(l+1))) (Theorem 4.2).
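    For intuition, the classical two-database scheme in which each query and each answer is a linear function of the database is an example of the linear type: queries are n-bit vectors, so total communication is Θ(n), matching the Ω(n) lower bound. A minimal sketch (toy parameters, not taken from the paper):

```python
# Classical 2-server linear-type PIR sketch. Each query is an n-bit vector and
# each answer is the XOR (inner product over GF(2)) of the selected bits, so
# communication is Theta(n). Toy code for illustration only.
import secrets

def answer(db, query):
    # Server side: XOR of the database bits selected by the query vector.
    acc = 0
    for bit, q in zip(db, query):
        acc ^= bit & q
    return acc

def retrieve(db_copy1, db_copy2, i, n):
    # User side: a uniformly random subset to server 1, the same subset with
    # position i flipped to server 2. Each query alone reveals nothing about i.
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = list(q1)
    q2[i] ^= 1
    # XOR of the two answers cancels every position except i.
    return answer(db_copy1, q1) ^ answer(db_copy2, q2)
```

    The sublinear bounds for multilinear and affine types show how moving beyond this purely linear structure can beat Θ(n).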

  • The Decision Diffie-Hellman Assumption and the Quadratic Residuosity Assumption

    Taiichi SAITO  Takeshi KOSHIBA  Akihiro YAMAMURA  

     
    PAPER

      Page(s):
    165-171

    This paper examines similarities between the Decision Diffie-Hellman (DDH) assumption and the Quadratic Residuosity (QR) assumption. In addition, we show that many cryptographic protocols based on the QR assumption can be reconstructed using the DDH assumption.

  • Relations among Security Goals of Probabilistic Public-Key Cryptosystems

    Ako SUZUKI  Yuichi KAJI  Hajime WATANABE  

     
    PAPER

      Page(s):
    172-178

    This paper formalizes some new notions of security for probabilistic public-key encryption schemes. The framework for these notions was originally presented in the work by Bellare et al., in which they consider non-malleability and indistinguishability under chosen-plaintext attack, non-adaptive chosen-ciphertext attack, and adaptive chosen-ciphertext attack. This paper extends the results of Bellare et al. by introducing two goals, equivalence undecidability and non-verifiability, under the above three attack models. Such goals are sometimes required in electronic voting and bidding systems. It is shown that equivalence undecidability, non-verifiability, and indistinguishability are all equivalent under the three attack models.

  • A Chosen-Cipher Secure Encryption Scheme Tightly as Secure as Factoring

    Eiichiro FUJISAKI  Tatsuaki OKAMOTO  

     
    PAPER

      Page(s):
    179-187

    At Eurocrypt'98, Okamoto and Uchiyama presented a new trapdoor (one-way) function based on factoring, while Fujisaki and Okamoto, at CRYPTO'99, showed a generic conversion from merely one-way encryption to chosen-cipher secure encryption in the random oracle model. This paper shows that the result of combining both schemes is well harmonized (rather than an arbitrary combination) and, in the sense of exact security, boosts the level of security more than would be expected from [6]: the security of the scheme yielded by the combination is tightly reduced from factoring. This paper also gives a rigorous description of the new scheme, because this type of encryption may suffer serious damage if poorly implemented. The proposed scheme is at least as efficient as any other chosen-cipher secure asymmetric encryption scheme such as [2],[4],[13].

  • New Multiplicative Knapsack-Type Public Key Cryptosystems

    Shinya KIUCHI  Yasuyuki MURAKAMI  Masao KASAHARA  

     
    PAPER

      Page(s):
    188-196

    In this paper, we first propose two high-rate methods based on the Morii-Kasahara cryptosystem. Method A-I is based on the Schalkwijk algorithm. Method A-II is based on the extended Schalkwijk algorithm, which is proposed in this paper. We then show that these proposed methods can yield a higher rate than the ElGamal cryptosystem. Next, we also propose two methods for fast encryption by dividing the message vector into several pieces. Regarding each of the divided vectors as an index, we can realize a fast transformation of the index into a limited-weight vector. In Method B-I, the Schalkwijk algorithm is used for the fast transformation. In Method B-II, the fast transformation is realized by table lookup. These methods realize faster encryption than Method A-I, Method A-II, and the Morii-Kasahara cryptosystem. The security of the proposed methods is based on the security of the Morii-Kasahara cryptosystem.

  • A Signature Scheme with Message Recovery as Secure as Discrete Logarithm

    Masayuki ABE  Tatsuaki OKAMOTO  

     
    PAPER

      Page(s):
    197-204

    This paper, for the first time, presents a provably secure signature scheme with message recovery based on the elliptic-curve discrete logarithm. The proposed scheme is proven to be secure in the strongest sense (i.e., existentially unforgeable against adaptively chosen message attacks) in the random oracle model under the discrete logarithm assumption. We give a concrete analysis of the security reduction. When practical hash functions are used in place of truly random functions, the proposed scheme is almost as efficient as the elliptic-curve version of the Schnorr signature scheme and existing schemes with message recovery such as the elliptic-curve version of the Nyberg-Rueppel and Miyaji schemes.

  • A Threshold Digital Signature Scheme for a Smart Card Based System

    Kunihiko MIYAZAKI  Kazuo TAKARAGI  

     
    PAPER

      Page(s):
    205-213

    This paper describes an efficient k-out-of-n threshold digital signature scheme for a smart card based system in which a signer uses multiple cards so that signatures can be issued in a dependable manner. The main features of our method are that it does not require a secret communication path among the cards in the signature issuing protocol, and that it has low communication and computational complexity. The former is an advantage under current export control regulations, which make it hard to export cipher techniques of more than 56 bits; the latter is an advantage over so-called robust signature schemes.

  • A Digital Signature Scheme on ID-Based Key-Sharing Infrastructures

    Tsuyoshi NISHIOKA  Goichiro HANAOKA  Hideki IMAI  

     
    PAPER

      Page(s):
    214-221

    ID-based key sharing schemes are one of the important topics in key management, and the Key Predistribution System (KPS) is one of the major divisions of such schemes. In KPS, in order to share a common key, a participant simply feeds the partner's identifier value into a secret algorithm. Despite this remarkable property and its high contribution to key management for digital signatures, KPS has downsides as well. In this paper, we propose an efficient signature scheme on the KPS infrastructure that overcomes these difficulties. It is shown that if an ID-based key sharing system belonging to KPS is provided, the new digital signature scheme can be used straightforwardly. Moreover, this signature scheme is proven to be secure if the discrete logarithm problem is hard. Other digital signature schemes based on KPS already exist, but they contain unavoidable flaws: the verifier is restricted and a tamper-resistant module (TRM) is required. Our method resolves these problems: all signatures can be authenticated by any entity, based on the inherent behavior of the key generator rather than on a common key, and no TRM is required. In order to describe the new scheme, a new concept of "one-way homomorphism" is introduced.

  • Optimal Unconditionally Secure ID-Based Key Distribution Scheme for Large-Scaled Networks

    Goichiro HANAOKA  Tsuyoshi NISHIOKA  Yuliang ZHENG  Hideki IMAI  

     
    PAPER

      Page(s):
    222-230

    Efficient ID-based key sharing schemes are desired worldwide in order to secure communications on the Internet and related networks, and the Key Predistribution System (KPS) is one of the major such schemes. The remarkable property of KPS is that a user need only input the partner's identifier to the secret KPS algorithm in order to share a key. Although this is just one of the many advantages KPS offers in terms of efficiency, an enormous amount of memory is required to achieve perfect security. While conventional KPS methods can establish communication links between any pair of entities in a communication system, in most practical communication environments, such as broadcast systems, not all links are required. In this article, we present a method that removes the unnecessary communication links. In our scheme, the required memory size per entity is proportional to the number of the entity's communication partners, whereas in conventional KPS it is proportional to the number of entities in the whole communication system. For example, if an entity communicates with only 1/r of the others, the memory requirement is reduced to 1/r of that of conventional KPS. Furthermore, the obtained memory size is proven to be optimal. Overall, our scheme achieves secure communication efficiently and is particularly suited to large-scale networks.

  • On the Security of the Okamoto-Tanaka ID-Based Key Exchange Scheme against Active Attacks

    Seungjoo KIM  Masahiro MAMBO  Takeshi OKAMOTO  Hiroki SHIZUYA  Mitsuru TADA  Dongho WON  

     
    PAPER

      Page(s):
    231-238

    To the best of the authors' knowledge, the rigorous security of the Okamoto-Tanaka identity-based key exchange scheme was shown in [4] for the first time since its invention. However, that analysis deals only with the passive attack. In this paper, we give several models of active attacks against the scheme and show its rigorous security in these models. We prove several relationships among the attack models, including that (1) breaking the scheme in one attack model is equivalent to breaking the RSA public-key cryptosystem, and (2) breaking the scheme in another attack model is equivalent to breaking the Diffie-Hellman key exchange scheme over Z_n. The difference in complexity stems from the difference in the timing of the dishonest party's sending and receiving of messages.

  • A Flexible Method for Masked Sharing of Group Keys

    Jun ANZAI  Natsume MATSUZAKI  Tsutomu MATSUMOTO  

     
    PAPER

      Page(s):
    239-246

    This paper proposes a group key distribution scheme with user exclusion. User exclusion is the problem of distributing an encryption key over a broadcast channel shared by n users so that all but d excluded users can get the group key. In broadcast channels such as Pay-TV, Internet multicast, and group mobile telecommunications, a manager should exclude a dishonest user or an unauthorized terminal as soon as possible to protect the secrecy of the group communication. However, user exclusion takes a long time on a large group if the distributor distributes the group key to each user except the excluded ones. We propose a scheme in which the amount of transmission and the key storage of each user do not depend on the number of users in the group. Moreover, our scheme does not require a fixed, privileged distributor.

  • A Subscriber-Excluding and Traitor-Tracing Broadcast Distribution System

    Maki YOSHIDA  Toru FUJIWARA  

     
    PAPER

      Page(s):
    247-255

    A broadcast distribution system (BDS) is a system for the distribution of digital contents over a broadcast channel, where the data supplier broadcasts the contents in encrypted form and gives each subscriber a decoder containing a secret decryption key. A traitor is a subscriber who leaks information that allows the broadcast to be decrypted. When a pirate decoder is captured, if at least one traitor can be identified from it, a BDS is said to be traitor-tracing. If the data supplier can prevent subscribers from obtaining the contents without recalling their decoders, a BDS is said to be subscriber-excluding. In this paper, we propose an efficient BDS that is both subscriber-excluding and traitor-tracing. We use mathematics similar to that of a threshold cryptosystem. In the proposed BDS, the maximum number of excluded subscribers reaches the maximum number of traitors in a coalition for which at least one traitor can be identified. We prove that the proposed BDS is secure against ciphertext-only attack if and only if the ElGamal cryptosystem is secure against that attack and the discrete logarithm problem is hard. The proposed BDS is the first to satisfy all the following features: both subscriber-excluding and traitor-tracing, identification of all traitors, black-box tracing, and a public key system.

  • On the Practical Secret Sharing Scheme

    Wakaha OGATA  

     
    PAPER

      Page(s):
    256-261

    In this paper, we attempt to construct practical secret sharing schemes, which have small share size and can detect cheating with high probability. We define two notions, the secure ramp scheme and the strongly secure ramp scheme. Then, we propose two constructions of secure ramp schemes. Both have small share size and can detect cheating with high probability, so they are practical secret sharing schemes.
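    For context, ramp schemes trade a controlled amount of secrecy for shares smaller than the secret; the threshold baseline they relax is Shamir's (k,n) scheme, in which every share is as large as the secret itself. A minimal sketch of that baseline (the field size and parameters are illustrative, not taken from the paper):

```python
# Shamir (k,n) threshold secret sharing over a prime field: any k shares
# reconstruct the secret, any k-1 reveal nothing, and each share is as large
# as the secret -- the share-size cost that ramp schemes reduce.
import secrets

P = 2**61 - 1  # a Mersenne prime, used as the field modulus

def share(secret, k, n):
    """Evaluate a random degree-(k-1) polynomial with f(0) = secret at x = 1..n."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the shared polynomial at x = 0."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret
```

    A (k1,k2,n) ramp scheme generalizes this by letting secrecy degrade gradually between k1 and k2 shares, in exchange for shares a fraction of the secret's size.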

  • An Analytic Construction of the Visual Secret Sharing Scheme for Color Images

    Hiroki KOGA  Mitsugu IWAMOTO  Hirosuke YAMAMOTO  

     
    PAPER

      Page(s):
    262-272

    This paper proposes a new construction of the visual secret sharing scheme for the (n,n)-threshold access structure applicable to color images. The construction uses matrices with n rows that can be identified with homogeneous polynomials of degree n. It is shown that, if we find a set of homogeneous polynomials of degree n satisfying a certain system of simultaneous partial differential equations, we can construct a visual secret sharing scheme for the (n,n)-threshold access structure by using the matrices corresponding to the homogeneous polynomials. The construction is easily extended to the cases of the (t,n)-threshold access structure and more general access structures.

  • An Image Correction Scheme for Video Watermarking Extraction

    Akihiko KUSANAGI  Hideki IMAI  

     
    PAPER

      Page(s):
    273-280

    Watermarking techniques proposed up to now have some measure of robustness against non-geometrical alteration; however, most of them are not very robust against geometrical alteration. For video watermarking as well, small translations, scalings, rotations, and affine transformations can be effective attacks on the watermark. In this paper, we propose an image correction scheme for video watermark extraction. This scheme embeds a 'patchwork' pattern into digital video data for position detection, and corrects images attacked by geometrical alteration on the basis of the detected positions. We also show simulation results of applying the proposed scheme to a conventional watermarking technique.

  • Secure Protocol to Construct Electronic Trading

    Shin'ichiro MATSUO  Hikaru MORITA  

     
    PAPER

      Page(s):
    281-288

    As one form of electronic commerce, online trading in stocks is growing rapidly in scale. Although brokers currently stand between the customers as trustees in the market, retrenchment of brokers seems inevitable. This paper proposes a protocol that allows trading to proceed with only the market and the customers. We first show the characteristics required for this type of trading. Next, to fulfill these characteristics, we apply an electronic auction protocol and digital signatures. The result is a trading protocol with security equivalent to that of the current trading system.

  • Efficient Sealed-Bid Auction by Using One-Way Functions

    Kunio KOBAYASHI  Hikaru MORITA  Koutarou SUZUKI  Mitsuari HAKUTA  

     
    PAPER

      Page(s):
    289-294

    The need for electronic sealed-bid auction services with quantitative competition is increasing. This paper proposes a new method that combines one-way functions and a bit commitment technique for quantitative competitive sealed-bid auctions. Since each modular exponentiation is replaced with a one-way function, the proposed method's computational time is one forty-thousandth that of former methods, and the proposed method suits mass-bidder systems.
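
    The one-way-function commitment at the heart of such a method can be illustrated with a modern hash function (SHA-256 here is an illustrative stand-in; the paper's concrete primitives may differ):

```python
# A hash-based commitment to a bid: publish the digest before the
# auction closes, open it later by revealing the nonce and the bid.
# A generic sketch of the primitive, not the paper's protocol.
import hashlib
import secrets

def commit(bid: int):
    nonce = secrets.token_bytes(16)     # randomness keeps the bid hidden
    digest = hashlib.sha256(nonce + bid.to_bytes(8, "big")).hexdigest()
    return digest, nonce                # publish digest, keep nonce secret

def verify(digest, nonce, bid: int) -> bool:
    return digest == hashlib.sha256(nonce + bid.to_bytes(8, "big")).hexdigest()

d, n = commit(150)
assert verify(d, n, 150)        # opening with the true bid succeeds
assert not verify(d, n, 151)    # changing the bid breaks the commitment
```

    One hash evaluation per check is what replaces the modular exponentiations, which is the source of the claimed speedup.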

  • Access Control Model with Provisional Actions

    Michiharu KUDO  Satoshi HADA  

     
    PAPER

      Page(s):
    295-302

    In most access control systems, authorization is specified using binary decisions, "yes" or "no," on access requests, resulting in access being permitted or denied, respectively. We argue that emerging Internet applications require this binary decision to be extended to "allow access provided some actions are taken." We propose the notion of provisional actions, which specifies the necessary actions to be performed in addition to the binary decision, and introduce an access control model for it. We also provide an administrative model for policy management purposes.

  • On the Randomness of Chambers and Gollmann Keystream Generator

    Fumio SATO  Kaoru KUROSAWA  

     
    PAPER

      Page(s):
    303-310

    NOR self-decimated sequences are attractive for stream ciphers because they have good statistical properties and their hardware construction is very simple. This paper presents an analysis of the NOR self-decimation system for any parameter. We first determine the period. We then show the exact distribution of two and three consecutive bits, which is shown to be almost uniform.

  • An Algorithm for Cryptanalysis of Certain Keystream Generators Suitable for High-Speed Software and Hardware Implementations

    Miodrag J. MIHALJEVIC  Marc P. C. FOSSORIER  Hideki IMAI  

     
    PAPER

      Page(s):
    311-318

    An algorithm for the cryptanalysis of certain keystream generators is proposed. The developed algorithm has the following two advantages over other reported ones: it is more powerful, and it can be implemented in high-speed software or simple hardware suitable for highly parallel architectures. The algorithm is based on error-correction of the information bits only (of the corresponding binary block code), with a novel method for constructing the parity-checks, and the employed error-correction procedure is APP-based threshold decoding. Experimental and theoretical analyses of the algorithm's performance are presented, and its complexity is evaluated. The proposed algorithm is compared with recently proposed improved fast correlation attacks based on convolutional codes and turbo decoding. The underlying principles, performance, and complexity are compared, and the gain obtained with the novel approach is pointed out.

  • Security of E2 against Truncated Differential Cryptanalysis

    Shiho MORIAI  Makoto SUGITA  Masayuki KANDA  

     
    PAPER

      Page(s):
    319-325

    This paper evaluates the security of the block cipher E2 against truncated differential cryptanalysis. We show an algorithm to search for effective truncated differentials. The result of the search confirmed that there exist no truncated differentials that lead to possible attacks on E2 with more than 8 rounds. The best attack breaks an 8-round variant of E2 with either IT-Function (the initial transformation) or FT-Function (the final transformation) using 2^94 chosen plaintexts. We also found an attack that distinguishes a 7-round variant of E2 with IT- and FT-Functions from a random permutation using 2^91 chosen plaintexts.

  • A New Product-Sum Type Public Key Cryptosystem Based on Reduced Bases

    Daisuke SUZUKI  Yasuyuki MURAKAMI  Ryuichi SAKAI  Masao KASAHARA  

     
    LETTER

      Page(s):
    326-330

    Encryption and decryption in product-sum type public key cryptosystems can be performed extremely fast. However, when the density is low, such a cryptosystem can be broken by the low-density attack. In this paper, we propose a new class of product-sum type public key cryptosystems based on reduced bases, which is invulnerable to the low-density attack.

  • Regular Section
  • A Statistical Estimation Method of Optimal Software Release Timing Applying Auto-Regressive Models

    Tadashi DOHI  Hiromichi MORISHITA  Shunji OSAKI  

     
    PAPER-Reliability, Maintainability and Safety Analysis

      Page(s):
    331-338

    This paper proposes a statistical method to estimate the optimal software release time which minimizes the expected total software cost incurred in both the testing and operation phases. It is shown that the underlying cost minimization problem can be reduced to a graphical one. This implies that the software release problem under consideration is essentially equivalent to time series forecasting for the software fault-occurrence time data. In order to predict future fault-occurrence times, we apply the three extraordinary auto-regressive models of Singpurwalla and Soyer (1985) as prediction devices, as well as the well-known AR and ARIMA models. Numerical examples illustrate the predictive performance of the proposed method. We compare it with the classical exponential software reliability growth model based on the non-homogeneous Poisson process, using actual software fault-occurrence time data.
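
    As a minimal illustration of the forecasting step, an AR(1) model can be fitted by least squares and used for one-step-ahead prediction (a generic sketch, not the Singpurwalla-Soyer models used in the paper):

```python
# Least-squares AR(1) fit and one-step-ahead forecast: the simplest
# instance of the time-series prediction step described above.
def fit_ar1(x):
    # minimize sum_t (x[t] - a * x[t-1])^2
    # -> a = sum x[t] x[t-1] / sum x[t-1]^2
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def predict_next(x):
    # one-step-ahead forecast from the fitted coefficient
    return fit_ar1(x) * x[-1]

series = [1.0, 0.5, 0.25, 0.125, 0.0625]   # exact a = 0.5 process
assert abs(fit_ar1(series) - 0.5) < 1e-9
assert abs(predict_next(series) - 0.03125) < 1e-9
```

    In the release-timing problem, such forecasts of future fault-occurrence times feed the cost minimization that determines when to stop testing.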

  • Novel Gm-C Realizations of nth-Order Filters

    Cheng-Chung HSU  Wu-Shiung FENG  

     
    PAPER-Analog Signal Processing

      Page(s):
    339-346

    This work describes a novel structural design for nth-order (n ≥ 2) transconductance and grounded capacitor (Gm-C) filters with multiple-loop feedback technology. The proposed design largely focuses on elucidating the relationship between the Gm-C filter structure and the feedback matrix, allowing us to systematically generate and formulate general design equations. The proposed design allows many new and interesting filter configurations to be produced alongside some known structures. Furthermore, numerical design examples and simulation results verify the feasibility of the Gm-C filter approach.

  • Reverse Link Capacity of a Wireless Multimedia CDMA System with Power and Processing Gain Control in Conjunction with CCI Cancellers

    Nasser HAMAD  Takeshi HASHIMOTO  

     
    PAPER-Communication Theory and Signals

      Page(s):
    347-355

    The capacity of a system consisting of N classes of users is characterized by N-vectors representing the numbers of users that can be accommodated under a specified BER (bit error rate) constraint in each class. In this paper, the system capacity of the reverse link of a wireless multimedia CDMA system with processing gain control is analyzed in the asymptotic regime as the processing gain G tends to infinity, for receivers with and without CCI cancellers. A new scheme for processing gain control with optimized power allocation is proposed, and its system capacity is compared with the conventional processing gain control scheme as well as the previously discussed power control scheme. It is shown that the proposed scheme has a certain advantage over the other schemes.

  • Morse Code Recognition Using Learning Vector Quantization for Persons with Physical Disabilities

    Cheng-Hong YANG  

     
    PAPER-Neural Networks and Bioengineering

      Page(s):
    356-362

    For physically disabled persons, the conventional computer keyboard is insufficient as a usable communication device. In this paper, Morse code is selected as an adaptive communication device for persons with impaired hand coordination and dexterity. Morse code is composed of a series of dots, dashes, and space intervals, each element transmitted by sending a signal for a defined length of time. Maintaining a stable typing rate is difficult for disabled users. To solve this problem, a suitable adaptive automatic recognition method, which combines a variable-degree variable-step-size LMS algorithm with a learning vector quantization method, was applied in the present study. The method presented here is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method achieves a better recognition rate than alternative methods in the literature.
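
    The prototype-adaptation idea of learning vector quantization can be sketched in one dimension, for instance to separate dot and dash tone durations (illustrative durations and learning rate, not the paper's data or parameters):

```python
# LVQ1 in one dimension: each training sample pulls its nearest
# prototype toward it when the labels agree, and pushes it away when
# they disagree. A sketch of the technique, not the paper's system.
def lvq1_train(samples, protos, lr=0.2, epochs=20):
    # protos: list of [position, label]; samples: list of (value, label)
    for _ in range(epochs):
        for x, label in samples:
            p = min(protos, key=lambda q: abs(q[0] - x))   # nearest prototype
            sign = 1 if p[1] == label else -1              # attract or repel
            p[0] += sign * lr * (x - p[0])
    return protos

# tone durations: short tones ("dot") vs. long tones ("dash")
data = [(1.0, "dot"), (1.2, "dot"), (0.9, "dot"),
        (3.0, "dash"), (3.3, "dash"), (2.8, "dash")]
protos = lvq1_train(data, [[1.5, "dot"], [2.5, "dash"]])
# classify a new tone by the nearest prototype's label
assert min(protos, key=lambda q: abs(q[0] - 2.9))[1] == "dash"
```

    Because the prototypes keep adapting, the classifier tracks a typist whose dot and dash durations drift over time, which is the unstable-typing-rate problem the paper addresses.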

  • Robust Kalman Filtering with Variable Forgetting Factor against Impulsive Noise

    Han-Su KIM  Jun-Seok LIM  SeongJoon BAEK  Koeng-Mo SUNG  

     
    LETTER-Digital Signal Processing

      Page(s):
    363-366

    In this letter, we propose a robust adaptive filter with a variable forgetting factor (VFF) for impulsive noise suppression. The proposed method restricts the perturbation of the parameters using an M-estimator and adaptively reduces the error propagation caused by impulsive noise using the VFF. Simulations show that the performance of the proposed algorithm is less vulnerable to impulsive noise than that of conventional Kalman-filter-based algorithms.
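
    The two ingredients, an M-estimator that clips large residuals and a forgetting factor that adapts when an impulse is suspected, can be sketched on a scalar tracking problem (thresholds and factors here are illustrative, not those of the letter):

```python
# Scalar tracking with a Huber-style M-estimator and a variable
# forgetting factor: a suspected impulse switches to long memory
# (lambda near 1), so the outlier barely perturbs the estimate.
def huber(r, c=3.0):
    # clip large residuals so impulses cannot dominate the update
    return max(-c, min(c, r))

def track(samples, lam_lo=0.90, lam_hi=0.995, c=3.0):
    est = samples[0]
    for z in samples[1:]:
        r = z - est
        # large residual -> suspected impulse -> nearly freeze the filter
        lam = lam_hi if abs(r) > c else lam_lo
        est = est + (1 - lam) * huber(r, c)
    return est

clean = [10.0] * 50
noisy = clean[:25] + [1000.0] + clean[25:]      # one impulsive outlier
assert abs(track(noisy) - 10.0) < 0.1
```

    Without the clipping and the adaptive lambda, the single outlier would be averaged into the estimate; with them, its effect on the final estimate is negligible.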

  • Acceleration Techniques for Synthesis and Analysis of Time-Domain Models of Interconnects Using FDTD Method

    Takayuki WATANABE  Hideki ASAI  

     
    LETTER-Circuit Theory

      Page(s):
    367-371

    This report describes an acceleration technique for synthesizing time-domain macromodels of interconnects using the FDTD method. In the FDTD calculation, the characteristic impedance of the interconnect is inserted at every terminal in order to damp the transient waveforms quickly. Additionally, an efficient technique for analyzing the macromodels is proposed. We demonstrate the efficiency of this method with examples.

  • An Efficient Implementation Method of a Metric Computation Accelerator for Fractal Image Compression Using Reconfigurable Hardware

    Hidehisa NAGANO  Akihiro MATSUURA  Akira NAGOYA  

     
    LETTER-VLSI Design Technology and CAD

      Page(s):
    372-377

    This paper proposes a method for implementing a metric computation accelerator for fractal image compression using reconfigurable hardware. The most time-consuming part of the encoding in this compression is the computation of metrics among image blocks. In our method, each processing element (PE), configured for an image block, accelerates these computations by pipeline processing. Furthermore, by configuring the PE for a specific image block, we can reduce the number of adders, which are the main computing elements, by half even in the worst case.

  • A New Approach to 3-D Profilometry for the White Light Interferometric (WLI)

    Seok-Moon RYOO  Tae-Sun CHOI  

     
    LETTER-Image

      Page(s):
    378-382

    A new approach to 3-D profilometry for white light interferometry (WLI) is presented. The proposed method is an extended depth from focus (EDFF) technique that determines the zero optical path difference (OPD) from the amount of fringe contrast degradation in the white light interferometer. In the method, the variance of the mismatch function and a modified local variance function are used as focus measures. The method has a theoretically unlimited range and can profile both optically rough and smooth surfaces with subpixel accuracy without changing the algorithm.