Xiaoling YU Yuntao WANG Chungen XU Tsuyoshi TAKAGI
Owing to its ability to support arbitrary operations over encrypted data, fully homomorphic encryption (FHE) has drawn considerable attention since it first appeared. Some FHE schemes have been constructed based on the general approximate common divisor (GACD) problem, which is widely believed to be intractable. Therefore, studying the hardness of the GACD problem can provide proper security parameters for these FHE schemes and their variants. This paper studies an orthogonal lattice algorithm introduced by Ding and Tao (the Ding-Tao algorithm) for solving the GACD problem. We revisit the condition under which the Ding-Tao algorithm works and obtain a new bound on the number of GACD samples based on the geometric series assumption. We also analyze the bound given in the previous work. To further verify the theoretical results, we conduct experiments on the Ding-Tao algorithm under our bound. We compare them with the experimental results under the previous bound, which indicates that, as the bound grows, the success probability under our bound is higher than that under the previous one.
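For orientation, the GACD problem is commonly stated as follows (a standard formulation sketched in our notation; the paper's exact parameterization may differ): the attacker is given samples

    $x_i = p\,q_i + r_i$, $i = 1, \dots, t$, with $|r_i| \ll p \ll x_i$,

where $p$ is the secret common divisor, the $q_i$ are large random integers, and the $r_i$ are small noise terms; the goal is to recover $p$. The orthogonal lattice approach searches for short vectors orthogonal to $(x_1, \dots, x_t)$, and the number $t$ of samples governs how many useful orthogonal vectors exist, which is why the bound on the number of samples matters.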
Yutaro KIYOMURA Tsuyoshi TAKAGI
Boneh et al. proposed a new approach to pairing-based cryptography that uses composite-order groups instead of prime-order groups. Recently, many cryptographic schemes using pairings over composite-order groups have been proposed. Miller's algorithm is used to compute pairings, and the cost of computing a pairing depends on the cost of the Miller loop. To speed up pairings of prime order, the number of iterations of the Miller loop can be reduced by choosing a prime order of low Hamming weight. However, it is difficult to choose a particular composite order that speeds up pairings of composite order. Kobayashi et al. proposed an efficient variant of Miller's algorithm based on a window method, called Window Miller's algorithm. Scalar multiplication of points on elliptic curves can also be computed using a window hybrid binary-ternary form (w-HBTF). In this paper, we propose a Miller's algorithm that uses the w-HBTF to compute the Tate pairing efficiently. This algorithm requires precomputation of both points on the elliptic curve and rational functions. The proposed algorithm was implemented in Java on a PC and compared with Window Miller's algorithm in terms of the time and memory needed to build their precomputed tables. We used the supersingular elliptic curve y^2 = x^3 + x with embedding degree 2 and a composite order of 2048 bits. We denote the window width by w. The proposed algorithm with w=6=2·3 was about 12.9% faster than Window Miller's algorithm with w=2, although the two algorithms use the same amount of memory. Moreover, the proposed algorithm with w=162=2·3^4 was about 12.2% faster than Window Miller's algorithm with w=7.
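As background for the loop-shortening idea (a standard relation, not a formula from the paper), Miller's algorithm builds the rational function $f_{r,P}$ used in the Tate pairing from the recurrence

    $f_{m+n,P} = f_{m,P} \cdot f_{n,P} \cdot \ell_{[m]P,[n]P} / v_{[m+n]P}$,

where $\ell_{[m]P,[n]P}$ is the line through $[m]P$ and $[n]P$ and $v_{[m+n]P}$ is the vertical line through $[m+n]P$. A window or hybrid binary-ternary recoding of the order shortens the chain of such updates, and the points and line functions attached to fixed window digits can be precomputed, which is the source of the time/memory trade-off measured above.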
Motoi YOSHITOMI Tsuyoshi TAKAGI Shinsaku KIYOMOTO Toshiaki TANAKA
Pairing-based cryptosystems can accomplish novel security applications such as ID-based cryptosystems, which could not be constructed efficiently without the pairing. The processing speed of pairing-based cryptosystems is relatively slow compared with other conventional public-key cryptosystems. However, several efficient algorithms for computing the pairing have been proposed, namely the Duursma-Lee algorithm and its variant, the ηT pairing. In this paper, we present an efficient implementation of the pairing on several mobile phones. Moreover, we compare the processing speed of the pairing with that of other standard public-key cryptosystems, i.e., the RSA cryptosystem and elliptic curve cryptosystems. Indeed, our implementation on ARM9 processors on BREW runs in under 100 milliseconds using the supersingular curve over F_{3^97}. In addition, the pairing is more efficient than the other public-key cryptosystems, and it can be computed efficiently enough on BREW mobile phones. It has thus become practical to implement security applications, such as short signatures, ID-based cryptosystems, or broadcast encryption, using the pairing on BREW mobile phones.
We show that under some conditions an attacker can break the public-key cryptosystem proposed by J. Schwenk and J. Eisfeld at Eurocrypt '96, which is based on the difficulty of factoring over the ring Z/nZ[x], even though its security is as hard as factoring a rational integer. We apply attacks previously reported against RSA-type cryptosystems with a low exponent to the Schwenk-Eisfeld cryptosystem and show a method of breaking the Schwenk-Eisfeld signature with a low exponent.
Rong HU Kirill MOROZOV Tsuyoshi TAKAGI
Code-based public-key encryption schemes (PKE) are candidates for post-quantum cryptography, since they are believed to resist attacks using quantum algorithms. The most famous such schemes are the McEliece and Niederreiter encryptions. In this paper, we present zero-knowledge (ZK) proof systems for proving statements about data encrypted using these schemes. Specifically, we present a proof of plaintext knowledge for both PKEs, and also a verifiable McEliece PKE. The main ingredients of our constructions are the ZK identification schemes by Stern from Crypto'93 and by Jain, Krenn, Pietrzak, and Tentes from Asiacrypt'12.
Naoyuki SHINOHARA Takeshi SHIMOYAMA Takuya HAYASHI Tsuyoshi TAKAGI
The security of pairing-based cryptosystems is determined by the difficulty of solving the discrete logarithm problem (DLP) over certain types of finite fields. One of the most efficient algorithms for computing a pairing is the ηT pairing over supersingular curves on finite fields of characteristic 3. Indeed, many high-speed implementations of this pairing have been reported, and it is an attractive candidate for practical deployment of pairing-based cryptosystems. Since the embedding degree of the ηT pairing is 6, we deal with the difficulty of solving a DLP over the finite field GF(3^{6n}), for which the function field sieve (FFS) is known as the asymptotically fastest algorithm. Moreover, several efficient techniques are employed for implementing the FFS, such as the large prime variation. In this paper, we estimate the time complexity of solving the DLP for the extension degrees n=97, 163, 193, 239, 313, 353, and 509 when the improved FFS is used. To accomplish our aim, we present several new computable estimation formulas for the explicit number of special polynomials used in the improved FFS. Our estimation contributes to evaluating the key length of pairing-based cryptosystems using the ηT pairing.
Toshiya NAKAJIMA Tetsuya IZU Tsuyoshi TAKAGI
The ηT pairing for supersingular elliptic curves over GF(3^m) has attracted attention because of its computational efficiency. Since most of the computation of the ηT pairing consists of GF(3^m) multiplications, it is important to improve the speed of this multiplication when implementing the ηT pairing. In this paper, we investigate software implementation of GF(3^m) multiplication and propose using irreducible trinomials x^m + ax^k + b over GF(3) such that k is a multiple of w, where w is the bit length of the word of the target CPU. We call such trinomials "reduction optimal trinomials (ROTs)." ROTs actually exist for several m and for the typical values w = 16 and 32. We list them for the extension degrees m = 97, 167, 193, 239, 317, and 487; these m are derived from security considerations. Using ROTs, we can implement modulo operations (reductions) for GF(3^m) multiplication more efficiently than with other types of irreducible trinomials (e.g., trinomials with a minimal k for each m). The reason is that with ROTs, the number of shift operations on multiple-precision data is reduced to less than half of that with other trinomials. Our implementation results show that reduction routines specialized for ROTs are 20-30% faster on a 32-bit CPU and approximately 40% faster on a 16-bit CPU compared with routines using irreducible trinomials with general k.
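As a rough illustration of why a word-aligned k helps (our reading of the argument, not a formula from the paper): reduction modulo the trinomial repeatedly applies

    $x^{m} \equiv -a\,x^{k} - b \pmod{x^m + a x^k + b}$,

so the upper half of a product is folded back into the coefficient array at offsets $k$ and $0$. When $k$ is a multiple of the word length $w$, the fold at offset $k$ lands on a word boundary of the packed multi-precision representation, removing the costly sub-word shifts that a general $k$ would require.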
Yasuhiko IKEMATSU Dung Hoang DUONG Albrecht PETZOLDT Tsuyoshi TAKAGI
ZHFE, proposed by Porras et al. at PQCrypto'14, is one of the very few existing multivariate encryption schemes and a promising candidate for post-quantum cryptosystems. Its only drawback is its slow key generation. At PQCrypto'16, Baena et al. proposed an algorithm to construct the private ZHFE keys, which is much faster than the original algorithm but still inefficient for practical parameters. Recently, Zhang and Tan proposed another private key generation algorithm, which is very fast but not necessarily able to generate all the private ZHFE keys. In this paper, we propose a new efficient algorithm for private key generation and estimate the number of possible keys generated by all existing private key generation algorithms for the ZHFE scheme. Our algorithm generates as many private ZHFE keys as the original and Baena et al.'s algorithms and reduces the complexity from Baena et al.'s O(n^{2ω+1}) to O(n^{ω+3}), where n is the number of variables and ω is the linear algebra constant. Moreover, we also analyze when the decryption of the ZHFE scheme does not work.
The Single Instruction, Multiple Data (SIMD) architecture enables parallel computation on a single processor. SIMD operations are implemented on processors such as the Pentium 3/4, Athlon, and SPARC, and even on smart cards. This paper proposes efficient algorithms for assembling an elliptic curve addition (ECADD), doubling (ECDBL), and k-iterated ECDBL (k-ECDBL) with SIMD operations. We optimize the number of auxiliary variables and the order of the basic field operations used in these addition formulas. If an addition chain has a k-bit zero run, we can replace k successive ECDBLs with the proposed faster k-ECDBL, improving the total efficiency of the scalar multiplication. Using the signed binary chain, we can compute a scalar multiplication about 10% faster than the previously fastest algorithm proposed by Aoki et al. Combined with the sliding window method or the width-w NAF window method, we also achieve about 10% faster parallelized scalar multiplication algorithms with SIMD operations. For implementation on smart cards, we establish two fast parallelized scalar multiplication algorithms with SIMD operations that are resistant to side-channel attacks.
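For reference, the textbook affine group operations whose field operations are being scheduled (on $y^2 = x^3 + ax + b$; the paper works with optimized, typically projective, variants):

    ECADD: $\lambda = (y_2 - y_1)/(x_2 - x_1)$, $x_3 = \lambda^2 - x_1 - x_2$, $y_3 = \lambda(x_1 - x_3) - y_1$;
    ECDBL: $\lambda = (3x_1^2 + a)/(2y_1)$, $x_3 = \lambda^2 - 2x_1$, $y_3 = \lambda(x_1 - x_3) - y_1$.

Independent multiplications and squarings inside such formulas are the operations that can be paired onto SIMD lanes.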
Mingwu ZHANG Tsuyoshi TAKAGI Bo YANG Fagen LI
A strong designated verifier signature (SDVS) scheme allows a verifier to privately check the validity of a signature. Recently, Huang et al. constructed the first identity-based SDVS scheme (HYWS) in a stronger security model with a non-interactive proof of knowledge, claiming the security properties of unforgeability, non-transferability, non-delegatability, and privacy of the signer's identity. In this paper, we show that their scheme does not provide the claimed properties. Our analysis indicates that the HYWS scheme neither resists designated verifier signature forgery nor provides simulation indistinguishability, which violates the security properties of unforgeability, non-delegatability, and non-transferability.
Masaaki SHIRASE Tsuyoshi TAKAGI Eiji OKAMOTO
Recently, the Tate pairing and its variants have attracted attention in cryptography. Their computation consists of a main iteration loop and a final exponentiation. The final exponentiation is necessary to generate a unique value of the bilinear pairing in the extension field. The main loop has been sped up considerably by recent improvements, e.g., the Duursma-Lee algorithm and the ηT pairing. In this paper, we discuss how to speed up the final exponentiation of the ηT pairing in the extension field F_{3^{6n}}. Indeed, we propose efficient algorithms using the torus T_2(F_{3^{3n}}) that can efficiently compute an inversion and a powering by 3^n + 1. Consequently, the total processing cost of computing the ηT pairing can be reduced by 16% for n=97.
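For context, the algebraic fact behind the torus technique (standard, stated here with $q = 3^{3n}$; the paper's concrete algorithms are omitted):

    $T_2(\mathbb{F}_q) = \{\, \alpha \in \mathbb{F}_{q^2}^{*} : \alpha^{q+1} = 1 \,\}$, and $\alpha^{-1} = \alpha^{q}$ for every $\alpha \in T_2(\mathbb{F}_q)$,

so inversion in the torus costs only a Frobenius map (conjugation), and torus elements admit a compressed representation. Roughly speaking, after part of the final exponentiation the pairing value lies in this norm-one subgroup of $\mathbb{F}_{3^{6n}}^{*}$, which is what makes the remaining inversion and the powering by $3^n+1$ cheap.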
Hisayoshi SATO Tsuyoshi TAKAGI Satoru TEZUKA Kazuo TAKARAGI
This paper investigates modular powering functions suitable for cryptography. It is well known that the Rabin encryption function is a 4-to-1 mapping whose one-wayness is as hard as factoring. The previously reported encryption schemes using a powering function are variants of either the 4-to-1 mapping or a higher n-to-1 mapping with n > 4. In this paper, we propose an optimized powering function that is a 3-to-1 mapping using a p^2q-type modulus. The one-wayness of the proposed powering function is as hard as the factoring problem. We present an efficient algorithm for computing the decryption for a p^2q-type modulus, which requires neither modular inversion nor division. Moreover, we construct new provably secure digital signatures as an application of the optimized functions. To achieve provable security in the random oracle model, one usually randomizes a message using random hashing or padding. However, the randomization must be repeated if the randomized message is not a cubic residue, which is inefficient for long messages. We propose an algorithm that deterministically finds the unique cubic residue element for a randomly chosen element.
Katsuyuki OKEYA Tsuyoshi TAKAGI Camille VUILLAUME
Elliptic curves offer interesting possibilities for alternative cryptosystems, especially in constrained environments like smartcards. However, cryptographic routines running on such lightweight devices can be attacked with the help of side-channel information such as power consumption. Elliptic curve cryptosystems are no exception: if no precaution is taken, power traces can help attackers reveal secret information stored in tamper-resistant devices. The Okeya-Takagi scheme (OT scheme) is an efficient countermeasure against such attacks on elliptic curve cryptosystems, with the unique feature of allowing any size for the precomputed table: depending on how much memory is available, users can flexibly change the table size to fit their needs. Since the nature of the OT scheme differs from other side-channel attack countermeasures, its security must be investigated in depth. In this paper, we present a comprehensive security analysis of the OT scheme and show that, based on information leaked by power consumption traces, attackers can slightly enhance standard attacks. We then explain how to prevent such information leakage with simple and efficient modifications.
Camille VUILLAUME Katsuyuki OKEYA Tsuyoshi TAKAGI
Koblitz curves belong to a special class of binary curves on which the scalar multiplication can be computed very efficiently. For this reason, they are suitable candidates for implementations on low-end processors. However, such devices are often vulnerable to side channel attacks. In this paper, we propose a new countermeasure against side channel attacks on Koblitz curves, which utilizes a fixed-pattern recoding to defeat simple power analysis. We show that in practical cases, the recoding can be performed from left to right, and can be easily stored or even randomly generated.
Yuntao WANG Yoshinori AONO Tsuyoshi TAKAGI
The learning with errors (LWE) problem is considered one of the most compelling candidates as a security foundation for post-quantum cryptosystems. To apply LWE-based cryptographic schemes, concrete parameters are necessary: the length n of the secret vector, the modulus q, and the deviation σ. In mid-2016, the TU Darmstadt group in Germany initiated the LWE Challenge in order to assess the hardness of LWE problems. There are several approaches to solving the LWE problem by reducing it to other lattice problems. Xu et al. solved some LWE Challenge instances using Liu-Nguyen's adapted enumeration technique (reducing LWE to the BDD problem) [23] and published this result at ACNS 2017 [32]. In this paper, we first apply progressive BKZ to the LWE Challenge cases with σ/q=0.005 using Kannan's embedding technique. We observe that the embedding technique is more efficient when the embedding factor M is closer to 1. We then analyze the optimal number of samples m for a successful attack on an LWE instance with secret length n. Third, based on this analysis, we give practical cost estimations using the precise progressive BKZ simulator. Our experimental results also show that for n ≥ 55 and fixed σ/q=0.005, the embedding technique with progressive BKZ is more efficient than Xu et al.'s implementation of the enumeration algorithm in [32],[14]. Moreover, with our parameter setting, we succeeded in solving the LWE Challenge instance (n, σ/q)=(70, 0.005) in 2^16.8 seconds (32.73 single-core hours).
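For orientation, Kannan's embedding in this setting works as follows (a standard description; notation ours). Given m LWE samples $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q)$, let $\mathbf{B}$ be a basis of the q-ary lattice $\{\mathbf{x} \in \mathbb{Z}^m : \mathbf{x} \equiv \mathbf{A}\mathbf{z} \pmod q \text{ for some } \mathbf{z}\}$ and build the row basis

    $\mathbf{B}' = \begin{pmatrix} \mathbf{B} & \mathbf{0} \\ \mathbf{b} & M \end{pmatrix}$.

The embedded lattice then contains the unusually short vector $(\mathbf{e}, M)$, which lattice reduction such as progressive BKZ can recover; keeping the embedding factor $M$ small (close to 1) keeps this target vector short, which matches the observation above.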
Erik DAHMEN Katsuyuki OKEYA Tsuyoshi TAKAGI
The most time-consuming operation in verifying a signature with the Elliptic Curve Digital Signature Algorithm is a multi-scalar multiplication with two scalars. Efficient methods for its computation are the Shamir method and the Interleave method, and the performance of both can be improved by using general base-2 representations of the scalars. In exchange for the speed-up, these representations require the precomputation of several points that must be stored. In the case of two precomputed points, the Interleave method and the Shamir method provide the same, optimal efficiency. With more precomputed points, only the Interleave method can be sped up in an optimal way and is currently more efficient than the Shamir method. This paper proposes a new general base-2 representation of the scalars that can be used to speed up the Shamir method. It requires the precomputation of ten points and is more efficient than any other representation that also requires ten precomputed points. The proposed method is therefore the first to improve the Shamir method to the point that it is faster than the Interleave method.
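For illustration, here is a minimal sketch of the plain (unsigned, single precomputed point) Shamir method for $k_1 P + k_2 Q$, written over a generic additive group so it runs on plain integers; the representations studied in the paper replace the bitwise digits with general base-2 recodings and a larger table of precomputed points:

    def shamir_multi_scalar(k1, k2, P, Q, add, dbl, zero):
        """Compute k1*P + k2*Q by scanning both scalars' bits jointly.

        add(A, B): group addition, dbl(A): doubling, zero: identity element.
        The only precomputed point here is P + Q; general base-2 recodings
        use more precomputed combinations to reduce additions further.
        """
        PQ = add(P, Q)                      # the single precomputed point
        R = zero
        for i in range(max(k1.bit_length(), k2.bit_length()) - 1, -1, -1):
            R = dbl(R)                      # one doubling per bit position
            b1, b2 = (k1 >> i) & 1, (k2 >> i) & 1
            if b1 and b2:
                R = add(R, PQ)
            elif b1:
                R = add(R, P)
            elif b2:
                R = add(R, Q)
        return R

    # Toy check over the integers (add = +, dbl = doubling, zero = 0):
    assert shamir_multi_scalar(23, 45, 3, 7,
                               add=lambda a, b: a + b,
                               dbl=lambda a: 2 * a,
                               zero=0) == 23 * 3 + 45 * 7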
We consider attacks on multi-prime RSA (MPRSA) with a modulus N = p_1 p_2 ⋯ p_r (r ≥ 3). It is believed that the small private exponent attack on MPRSA is less effective than that on RSA (see Hinek et al.'s work at SAC 2003), which means smaller private exponents can be used in MPRSA to speed up decryption. Our work shows that even if a private exponent is significantly beyond Hinek et al.'s bound, it may still be insecure if the prime difference Δ (Δ = p_r - p_1 = N^γ, supposing p_1 < p_2 < ⋯ < p_r) is small, i.e., 0 < γ < 1/r. Specifically, by taking full advantage of the properties of the primes, our small private exponent attack reveals that MPRSA is insecure when $\delta < 1-\sqrt{1+2\gamma-3/r}$ (if $\gamma \ge \frac{3}{2r}-\frac{1+\delta}{4}$) or $\delta \le \frac{3}{r}-\frac{1}{4}-2\gamma$ (if $\gamma < \frac{3}{2r}-\frac{1+\delta}{4}$), where δ is the exponent of the private exponent d with base N, i.e., d = N^δ. In addition, we present a Fermat-like factoring attack which factors N efficiently when Δ < N^{1/r^2}. These attacks surpass previous works (e.g., Bahig et al.'s at ICICS 2012) and are shown to be effective in practice.
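For reference, here is the classical two-prime Fermat factorization that the Fermat-like attack generalizes to r ≥ 3 primes (our illustrative sketch, not the paper's algorithm); it succeeds quickly exactly when the factors are close:

    from math import isqrt

    def fermat_factor(n):
        """Classical Fermat factorization: effective when the two factors are close.

        Searches for a representation n = a^2 - b^2 = (a - b)(a + b),
        starting from a = ceil(sqrt(n)).
        """
        a = isqrt(n)
        if a * a < n:
            a += 1
        while True:
            b2 = a * a - n
            b = isqrt(b2)
            if b * b == b2:
                return a - b, a + b
            a += 1

    # Toy example with two close primes:
    p, q = 10007, 10009
    assert fermat_factor(p * q) == (p, q)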
Kaisei KAJITA Go OHTAKE Kazuto OGAWA Koji NUIDA Tsuyoshi TAKAGI
We propose a short signature scheme under the ring-SIS assumption in the standard model. Specifically, by revisiting an existing construction [Ducas and Micciancio, CRYPTO 2014], we demonstrate lattice-based signatures with improved reduction loss. As far as we know, there has been no way to use multiple tags in the signature simulation of the security proof for lattice-based tag-based signatures. We address the possibility of tag collisions in the lattice setting, which improves the reduction loss. Our scheme generates tags from messages by building on a scheme satisfying a mild security notion, existential unforgeability against random message attacks with auxiliary information. Thus, our scheme can reduce the signature size, since tags need not be sent with the signatures. Our scheme has a short signature size of O(1) and achieves a tighter reduction loss than Ducas et al.'s scheme. The proposed scheme has two variants: one achieves a tighter reduction with the same verification key size of O(log n) as Ducas et al.'s scheme, where n is the security parameter; the other achieves a much tighter reduction loss of O(Q/n) with a verification key size of O(n), where Q is the number of signing queries.
Rui XU Kirill MOROZOV Tsuyoshi TAKAGI
Harn and Lin proposed an algorithm to detect and identify cheaters in Shamir's secret sharing scheme in the journal Designs, Codes and Cryptography, 2009; in particular, their algorithm for cheater identification is inefficient. We point out that some of their conditions for cheater detection and identification essentially follow from those on error detection/correction of Reed-Solomon codes, which have efficient decoding algorithms, while some other presented conditions turn out to be incorrect. An extended and improved version of the above scheme was recently presented at the conference International Computer Symposium 2012 (and the journal version appeared in IET Information Security). The new scheme, which is ideal (i.e., the share size is equal to that of the secret), attempts to identify cheaters from a minimal number of shares (i.e., a threshold number of them). We show, using arguments from coding theory, that the proposed cheater identification is impossible.
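For context, here is a minimal sketch of (t, n) Shamir sharing and Lagrange reconstruction over a prime field (our illustration, with an arbitrary modulus); the cheater-identification question is whether corrupted shares, viewed as errors in a Reed-Solomon codeword, can be located from only t shares, which is where the coding-theoretic impossibility argument applies:

    import random

    P = 2**31 - 1  # a prime modulus (illustrative choice)

    def share(secret, t, n):
        """Split `secret` into n shares, any t of which reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at 0 from exactly t shares."""
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(secret=42, t=3, n=5)
    assert reconstruct(shares[:3]) == 42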
The security of current public-key cryptosystems relies on the hardness of factoring large integers or solving discrete logarithm problems. However, these mathematical problems can be solved in polynomial time using a quantum computer. This vulnerability has prompted research into post-quantum cryptography based on alternative mathematical problems that remain secure in the era of quantum computers. In this regard, the National Institute of Standards and Technology (NIST) began standardizing post-quantum cryptography in 2016. In this expository article, we give an overview of recent research on post-quantum cryptography. In particular, we describe the construction and security of multivariate polynomial cryptosystems and lattice-based cryptosystems, which are the main candidates for post-quantum cryptography.