The expected write deficiency of index-less indexed flash codes (ILIFC) is studied. ILIFC is a coding scheme for flash memory that consists of two stages with different coding techniques. This study investigates the write deficiency of the first stage of ILIFC, and shows that omitting the second stage can be a practical option for realizing flash codes with good average performance. To discuss the expected write deficiency, a random walk model is introduced as a formalization of the behavior of ILIFC. Based on this random walk model, two different techniques are developed to estimate the expected write deficiency. One technique requires some computation but gives a very precise estimate of the write deficiency. The other gives a closed-form formula for the write deficiency under a certain asymptotic scenario.
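As an illustration of the random walk viewpoint, the following Monte Carlo sketch estimates the expected write deficiency of an abstracted first stage: each information bit is served by one slice at a time, a slice absorbs a fixed number of writes, and the stage ends when a write hits a bit whose slice is exhausted and no fresh slice remains. The parameters k, n and cap, and the uniform write model, are illustrative assumptions and not the paper's exact formalization.

```python
import random

def first_stage_deficiency(k, n, cap, rng):
    """Simulate the first stage of an ILIFC-like scheme (abstracted).

    k   : number of information bits
    n   : number of slices in the memory block
    cap : writes a single slice can absorb before it is exhausted
          (a stand-in for the cell levels available in a real slice)

    Returns the write deficiency: the capacity left unused when the
    first stage ends, i.e. when a write hits a bit whose slice is
    full and no fresh slice is available.
    """
    slice_of = {}          # bit index -> remaining capacity of its current slice
    free_slices = n
    writes = 0
    while True:
        bit = rng.randrange(k)             # uniformly random bit write
        if bit not in slice_of or slice_of[bit] == 0:
            if free_slices == 0:
                break                      # first stage ends here
            free_slices -= 1
            slice_of[bit] = cap            # allocate a fresh slice to this bit
        slice_of[bit] -= 1
        writes += 1
    return n * cap - writes                # total capacity minus absorbed writes

rng = random.Random(0)
samples = [first_stage_deficiency(k=16, n=32, cap=15, rng=rng) for _ in range(10000)]
print("estimated expected write deficiency:", sum(samples) / len(samples))
```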
The complete subtree (CS) method is one of the most well-known broadcast encryption schemes that do not require receivers to stay online. This paper aims to reduce the size of the secret information that must be stored in a terminal of the method. In the original CS method, the size of the secret information grows as the number of terminals increases. It is shown in this paper that, by making use of a one-way trapdoor permutation, the size can be made constant regardless of the number of terminals. The security of the proposed scheme is investigated, and a detailed comparison with other similar schemes is presented. The proposed scheme is suitable for practical implementations of the CS method.
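For reference, the following minimal sketch shows the secret information a terminal holds in the original CS method: one key per node on the path from its leaf to the root, i.e. log2 N + 1 keys for N terminals. The heap-style node numbering is an assumption made for illustration.

```python
def cs_keys_for_terminal(leaf: int, num_terminals: int) -> list[int]:
    """Node identifiers whose keys terminal `leaf` must store in the
    original CS method: its own leaf node and every ancestor up to the root.

    Nodes of the complete binary tree are numbered 1..2N-1 heap-style,
    so the leaves are N..2N-1 and the parent of node v is v // 2.
    """
    node = num_terminals + leaf            # heap index of the leaf
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path                            # length is log2(N) + 1

N = 16
print(cs_keys_for_terminal(leaf=5, num_terminals=N))   # 5 node keys for N = 16
print(len(cs_keys_for_terminal(0, N)))                 # log2(16) + 1 = 5
```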
Yuichi KAJIYAMA Naoki HAYASHI Shigemasa TAKAI
This paper proposes a consensus-based subgradient method under a common constraint set with switching undirected graphs. In the proposed method, each agent maintains a state and an auxiliary variable as estimates of an optimal solution and of the accumulated past subgradient information of its neighbor agents. We show that the states of all agents asymptotically converge to one of the optimal solutions of the convex optimization problem. Simulation results show that the proposed consensus-based algorithm with accumulated subgradient information achieves faster convergence than the standard subgradient algorithm.
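The sketch below illustrates the general idea of a projected consensus-based subgradient method over switching undirected graphs; it uses a plain projected subgradient step rather than the authors' auxiliary-variable update, and the toy problem, graph sequence and step sizes are assumptions.

```python
import numpy as np

# Toy problem: minimize sum_i |x - a_i| over the interval X = [-1, 1],
# distributed over 4 agents on a ring whose edge set switches each step.
a = np.array([-2.0, 0.5, 0.5, 3.0])
subgrad = lambda i, x: np.sign(x - a[i])          # a subgradient of |x - a_i|
project = lambda x: np.clip(x, -1.0, 1.0)         # projection onto X

def mixing_matrix(t, n=4):
    """Doubly stochastic weights of a ring that alternates between its
    'even' and 'odd' edge sets (a simple switching-graph model)."""
    W = np.eye(n)
    edges = [(0, 1), (2, 3)] if t % 2 == 0 else [(1, 2), (3, 0)]
    for i, j in edges:
        W[i, i] -= 0.5; W[j, j] -= 0.5
        W[i, j] += 0.5; W[j, i] += 0.5
    return W

x = np.zeros(4)                                   # one scalar state per agent
for t in range(2000):
    W = mixing_matrix(t)
    alpha = 1.0 / (t + 1) ** 0.75                 # diminishing step size
    consensus = W @ x                             # mix neighbors' states
    grads = np.array([subgrad(i, x[i]) for i in range(4)])
    x = project(consensus - alpha * grads)        # projected subgradient step

print("agent states:", x)   # all close to the constrained optimum x* = 0.5
```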
Hisashi MOHRI Ritsuko MATSUMOTO Yuichi KAJI
This study investigates new schemes for distributing cryptographic keys in sensor networks. Sharing a key is the very first step toward secure communication over an untrusted network infrastructure, but commonly used cryptographic techniques cannot be employed in sensor networks because of the limited computational resources of sensor nodes. A practical solution to this issue is to predistribute cryptographic keys in sensor nodes before they are deployed. A focal point of this solution is the choice of keys assigned to a sensor node. Eschenauer et al. proposed choosing keys randomly, and Chan et al. also followed the random-choice approach. In this paper we consider a new approach in which keys are assigned according to basic algebraic geometry. The performance of the proposed scheme is investigated analytically.
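The following sketch illustrates one simple algebraic assignment of keys, not necessarily the authors' construction: each node's key ring is the set of points on a line of the affine plane over GF(q), so two nodes on non-parallel lines share exactly one key while nodes on distinct parallel lines share none.

```python
q = 7                       # a small prime; the key pool is the affine plane over GF(q)
keys = {(x, y): f"K_{x}_{y}" for x in range(q) for y in range(q)}   # q^2 keys in total

def key_ring(slope, intercept):
    """Keys of the node identified with the line y = slope*x + intercept over GF(q)."""
    return {keys[(x, (slope * x + intercept) % q)] for x in range(q)}

# Two nodes on non-parallel lines share exactly one key (their intersection point);
# nodes on distinct parallel lines share no key.
A = key_ring(2, 3)
B = key_ring(5, 1)
C = key_ring(2, 6)
print(len(A))               # q keys stored per node
print(A & B)                # exactly one common key
print(A & C)                # empty: parallel lines never meet
```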
Naoki HAYASHI Yuichi KAJIYAMA Shigemasa TAKAI
This paper proposes a distributed algorithm over quantized communication networks for unconstrained optimization with smooth cost functions. We consider a multi-agent system whose local communication is represented by a fixed and connected graph. Each agent updates a state and an auxiliary variable, which estimate the optimal solution and the average gradient of the entire cost function, by a consensus-based optimization algorithm. The state and the auxiliary variable are sent to neighbor agents through a uniform quantizer. We derive a convergence rate of the proposed algorithm with respect to the error between the cost at the time-averaged state and the optimal cost. Numerical examples show that the solution estimated by the proposed quantized algorithm converges to the optimal solution.
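A minimal sketch of the ingredients described above is given below: a uniform quantizer applied to the transmitted state and auxiliary variable, combined with a generic consensus-based gradient-tracking update on a fixed ring graph. The update rule, step size and quantization step are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def quantize(v, delta=0.01):
    """Uniform quantizer with step `delta`, applied before transmission."""
    return delta * np.round(v / delta)

# Toy smooth problem: minimize sum_i (x - a_i)^2 over a fixed 4-agent ring.
a = np.array([1.0, 2.0, 3.0, 6.0])
grad = lambda x: 2.0 * (x - a)                  # local gradients, one per agent

W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])        # doubly stochastic ring weights

x = np.zeros(4)                                 # states (estimates of the optimum)
y = grad(x)                                     # auxiliary variables (gradient tracker)
alpha = 0.05
for _ in range(500):
    xq, yq = quantize(x), quantize(y)           # neighbors only see quantized values
    x_new = W @ xq - alpha * y
    y = W @ yq + grad(x_new) - grad(x)          # track the average gradient
    x = x_new

# States reach approximate consensus near the optimum mean(a) = 3.0,
# up to a quantization-dependent error.
print("states:", x, " optimum:", a.mean())
```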
Ako SUZUKI Yuichi KAJI Hajime WATANABE
This paper newly formalizes some notions of security for probabilistic public-key encryption schemes. The framework for these notions was originally presented in the work of Bellare et al., who consider non-malleability and indistinguishability under chosen-plaintext attack, non-adaptive chosen-ciphertext attack and adaptive chosen-ciphertext attack. This paper extends the results of Bellare et al. by introducing two goals, equivalence undecidability and non-verifiability, under the above three attack models. Such goals are sometimes required in electronic voting and bidding systems. It is shown that equivalence undecidability, non-verifiability and indistinguishability are all equivalent under the three attack models.
Yuichi KAJI Ryuichi NAKANISHI Hiroyuki SEKI Tadao KASAMI
Parallel multiple context-free grammars (pmcfg's) and multiple context-free grammars (mcfg's) were introduced as extensions of context-free grammars to describe the syntax of natural languages. Pmcfg's and mcfg's deal with tuples of strings, and it has been shown that the universal recognition problem for mcfg's is EXP-POLY time-complete, where the universal recognition problem is the problem of deciding whether G generates w for a given grammar G and string w. In this paper, the universal recognition problems for the class of pmcfg's and for the subclass of pmcfg's with the information-lossless condition are shown to be EXP-POLY time-complete and PSPACE-complete, respectively. It is also shown that the problems for pmcfg's and for mcfg's with a bounded dimension are both -complete and those for pmcfg's and for mcfg's with a bounded degree are both -complete. As a corollary, the problem for modified head grammars, introduced by Vijay-Shanker et al. to define the syntax of natural languages, is shown to be solvable in deterministic polynomial time.
The complete subtree (CS) method is widely accepted for broadcast encryption. A new method for assigning keys in the CS method is proposed in this paper. The essential idea behind the proposed method is to use two trapdoor permutations. Using the trapdoor information, the key management center computes and assigns a key to each terminal so that the terminal can derive all information necessary in the CS method. A terminal has to keep just one key, whereas log2 N + 1 keys are needed in the original CS method, where N is the number of terminals. The permutations to be used must satisfy a property that is similar to, but slightly different from, the claw-free property. The required property, named the strongly semi-claw-free property, is formalized in terms of probabilistic polynomial-time algorithms, and its relation to the claw-free property is discussed. It is also shown that if the permutations fulfill the strongly semi-claw-free property, then the proposed method is secure against attacks by malicious users.
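The derivation direction can be illustrated with the toy sketch below: the center uses the trapdoor to push a root key down to each leaf, and a terminal re-derives every ancestor key from its single leaf key by applying the public permutations upward. Toy RSA-style permutations with insecure parameters stand in for the strongly semi-claw-free permutations of the paper.

```python
# Toy illustration (insecure parameters): two RSA-style trapdoor permutations
# f_L(x) = x^eL mod n and f_R(x) = x^eR mod n on Z_n. The key of a node is
# obtained from the key of either child by applying f_L or f_R, so a terminal
# holding only its leaf key can recompute every ancestor key on its path,
# while going downward requires the trapdoor exponents known only to the center.
p, q = 1009, 1013                      # toy primes; real schemes need a large modulus
n, phi = p * q, (p - 1) * (q - 1)
eL, eR = 5, 13                         # public exponents (coprime to phi)
dL, dR = pow(eL, -1, phi), pow(eR, -1, phi)   # trapdoor exponents of the center

fL = lambda x: pow(x, eL, n)           # public permutations (upward derivation)
fR = lambda x: pow(x, eR, n)
fL_inv = lambda x: pow(x, dL, n)       # trapdoor inverses (center only)
fR_inv = lambda x: pow(x, dR, n)

def center_assign_leaf_key(root_key, path):
    """Center: walk down from the root, inverting f_L / f_R with the trapdoor."""
    k = root_key
    for bit in path:                   # path bits: 0 = left child, 1 = right child
        k = fL_inv(k) if bit == 0 else fR_inv(k)
    return k

def terminal_derive_path_keys(leaf_key, path):
    """Terminal: apply the public permutations to climb back to the root."""
    keys, k = [leaf_key], leaf_key
    for bit in reversed(path):
        k = fL(k) if bit == 0 else fR(k)
        keys.append(k)                 # keys of all ancestors, root key last
    return keys

root_key, path = 123456, [0, 1, 1]     # depth-3 tree, terminal at leaf "011"
leaf_key = center_assign_leaf_key(root_key, path)
print(terminal_derive_path_keys(leaf_key, path)[-1] == root_key)   # True
```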
We investigate the minimum weights of simple full-length array LDPC codes (SFA-LDPC codes). SFA-LDPC codes are a subclass of LDPC codes constructed algebraically from two integer parameters p and j. Mittelholzer and Yang et al. have studied the minimum weights of SFA-LDPC codes, but the exact minimum weights are not known except for some small p and j. In this paper, we show that the minimum weights of the SFA-LDPC codes with j=4 and j=5 are upper-bounded by 10 and 12, respectively, independently of the prime p. Combining these results with Yang's lower bounds, we conclude that the minimum weights of the SFA-LDPC codes with j=4 and p>7 are exactly 10, and those of the SFA-LDPC codes with j=5 are either 10 or 12.
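For concreteness, the sketch below builds the parity-check matrix of a full-length array LDPC code as a j x p array of p x p circulant permutation matrices, block (a, b) being the (a*b)-fold cyclic shift; the column weight j and row weight p can then be checked directly. The notation is assumed to match the SFA-LDPC construction discussed above.

```python
import numpy as np

def array_ldpc_H(p, j):
    """Parity-check matrix of the full-length array LDPC code:
    a j x p array of p x p circulants, block (a, b) = sigma^(a*b),
    where sigma is the single cyclic shift of the identity matrix."""
    sigma = np.roll(np.eye(p, dtype=int), 1, axis=1)   # cyclic shift permutation
    blocks = [[np.linalg.matrix_power(sigma, (a * b) % p) for b in range(p)]
              for a in range(j)]
    return np.block(blocks)                            # size jp x p^2

H = array_ldpc_H(p=7, j=4)
print(H.shape)                                         # (28, 49)
print(H.sum(axis=0).min(), H.sum(axis=0).max())        # every column has weight j = 4
print(H.sum(axis=1).min(), H.sum(axis=1).max())        # every row has weight p = 7
```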
Yuki TAKEDA Yuichi KAJI Minoru ITO
An information flow problem is a graph-theoretical formalization of the transportation of information over a complicated network. It is known that linear network codes play an essential role in a certain type of information flow problem, but it is not clearly understood how much linear network codes contribute to other types of information flow problems. One basic question concerning this aspect is the linear solvability of information flow problems, that is, to decide whether there is a linear network code that is a solution to a given information flow problem. Lehman et al. characterized the linear solvability of information flow problems in terms of constraints on the sets of source and sink nodes. As an extension of Lehman's investigation, this study introduces a hierarchy constraint on messages, and discusses the computational complexity of the linear solvability of information flow problems with hierarchy constraints. Nine new classes of problems are defined and classified into one of the three categories discovered by Lehman et al.
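The classic butterfly network, sketched below, illustrates what a linear solution to an information flow problem looks like: a single XOR on the bottleneck edge lets both sinks recover both source messages. The example is standard background and is not taken from the paper.

```python
# The butterfly network: source messages a and b must both reach two sinks,
# but the middle edge can carry only one symbol per use. A linear network
# code over GF(2) sends a XOR b on the bottleneck; each sink recovers the
# missing message by XORing the coded symbol with the message it receives
# directly.
def butterfly(a: int, b: int):
    coded = a ^ b                       # symbol carried on the bottleneck edge
    sink1 = (a, coded ^ a)              # receives a directly, recovers b
    sink2 = (coded ^ b, b)              # receives b directly, recovers a
    return sink1, sink2

for a in (0, 1):
    for b in (0, 1):
        assert butterfly(a, b) == ((a, b), (a, b))
print("both sinks decode (a, b) for every input -- a linear solution exists")
```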
A side-channel attack is a class of security attacks that tries to recover secret information by analyzing side information, such as electromagnetic radiation, heat, power consumption and running time, that is unintentionally emitted from a computer system. A side-channel attack that focuses on the running time of a cryptosystem is specifically called a "timing attack". Timing attacks are relatively easy to carry out and are particularly threatening for tiny systems used in smart cards and IoT devices, because such a system is so simple that its processing time can be clearly observed from outside the card or device. The threat of timing attacks is especially serious when an attacker actively controls the input to a target program. Countermeasures have been studied to deter such active attacks, but the attacker still has a chance to learn something about the concealed information by passively watching the running time of the target program. The risk of passive timing attacks can be measured by the mutual information between the concealed information and the running time. However, computing this mutual information is hardly possible except for toy examples. This study focuses on three algorithms for RSA decryption, derives formulas for the mutual information under several assumptions and approximations, and computes the mutual information numerically for practical security parameters.
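The sketch below is a toy model of the leakage being quantified, not the paper's derivation: the running time of square-and-multiply exponentiation is modeled as proportional to the Hamming weight of the secret exponent plus noise, and the mutual information between one key bit and the observed time is estimated from empirical frequencies.

```python
import math, random
from collections import Counter

rng = random.Random(1)
BITS = 16                                    # toy secret-exponent length

def running_time(d):
    """Toy timing model: one time unit per squaring plus one extra unit
    per multiply (i.e. per 1 bit of the exponent), plus Gaussian noise."""
    return BITS + bin(d).count("1") + rng.gauss(0.0, 0.5)

# Estimate I(lowest key bit ; observed running time) from empirical
# frequencies, binning the (noisy) time to the nearest integer.
samples = [(d & 1, round(running_time(d)))
           for d in (rng.getrandbits(BITS) for _ in range(200000))]
N = len(samples)
joint = Counter(samples)
pb = Counter(b for b, _ in samples)
pt = Counter(t for _, t in samples)
mi = sum(c / N * math.log2((c / N) / ((pb[b] / N) * (pt[t] / N)))
         for (b, t), c in joint.items())
print(f"estimated mutual information: {mi:.4f} bits")   # small but nonzero leakage
```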
An algorithm for encoding low-density parity-check (LDPC) codes is investigated. The algorithm computes parity-check symbols by solving a set of sparse equations, and triangular factorization is employed to solve the equations efficiently. It is shown analytically and experimentally that the proposed algorithm is more efficient than Richardson's encoding algorithm if the code has a small gap.
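The core step assumed by such encoders can be sketched as follows: once the parity part of H is permuted into lower-triangular form, the parity symbols are obtained by sparse forward substitution over GF(2). The tiny matrices below are illustrative and do not reproduce the paper's (or Richardson's) preprocessing.

```python
import numpy as np

# H = [Hs | Hp] with Hp lower triangular over GF(2).  For a message s,
# the parities p solve Hp p = Hs s (mod 2), found by forward substitution
# that only touches the nonzero entries of each sparse row.
Hs = np.array([[1, 0, 1, 1],
               [0, 1, 1, 0],
               [1, 1, 0, 1]], dtype=int)
Hp = np.array([[1, 0, 0],
               [1, 1, 0],
               [0, 1, 1]], dtype=int)      # lower-triangular parity part

def encode(s):
    rhs = Hs @ s % 2
    p = np.zeros(Hp.shape[0], dtype=int)
    for i in range(Hp.shape[0]):           # forward substitution over GF(2)
        p[i] = (rhs[i] + Hp[i, :i] @ p[:i]) % 2
    return np.concatenate([s, p])

s = np.array([1, 0, 1, 1])
c = encode(s)
H = np.hstack([Hs, Hp])
print(c, (H @ c) % 2)                      # codeword and its (all-zero) syndrome
```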
Kazuhiro TAKADA Yuichi KAJI Tadao KASAMI
Some kinds of practical problems, such as the security verification of cryptographic protocols, can be described as problems of accomplishing a given purpose using only limited operations and limited materials. To model such problems in a natural way, unification problems under constrained substitutions have been proposed. This paper is a collection of results on the decidability and the computational complexity of syntactic unification problems under constrained substitutions. A number of decidability, undecidability, tractability and intractability results for the problem are presented. Since a unification problem under constrained substitutions can be regarded as an order-sorted unification problem with term declarations in which the number of sorts is one, the results presented in this paper also indicate how the intractability of order-sorted unification problems is reduced by restricting the number of sorts to one.
Yuichi KAJI Hiroyuki SEKI Tadao KASAMI
Finite state translation systems (fsts') are a widely studied computational model in the area of tree automata theory. In this paper, the string generating capacities of fsts' and their subclasses are studied. First, it is shown that the class of string languages generated by deterministic fsts' equals that of parallel multiple context-free grammars, which are an extension of context-free grammars. As a corollary, it can be concluded that the recognition problem for a deterministic fsts is solvable in O(n^{e+1}) time, where n is the length of the input word and e is a constant called the degree of the deterministic fsts. In contrast, it is also shown that nondeterministic monadic fsts' with state-bound 2 can generate an NP-complete language.
A new algorithm for LogMAP decoding of linear block codes is considered. The decoding complexity is evaluated analytically and by computer simulation. The proposed algorithm is an improvement of the recursive LogMAP algorithm previously proposed by the authors. The recursive LogMAP algorithm is more efficient than the BCJR algorithm for low-rate codes, but its complexity grows considerably for high-rate codes. The aim of the proposed algorithm is to avoid this complexity explosion of the recursive LogMAP algorithm for high-rate codes. The proposed algorithm is more efficient than the BCJR algorithm for well-known linear block codes.
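For background, the primitive at the heart of any LogMAP decoder is the Jacobian logarithm max*(a, b) = max(a, b) + ln(1 + e^{-|a-b|}), which Max-LogMAP approximates by dropping the correction term; the snippet below shows the exact and approximate versions. This is general background rather than the proposed recursion.

```python
import math

def max_star(a, b):
    """Jacobian logarithm: ln(e^a + e^b), computed in the log domain.
    LogMAP decoders combine path metrics with this exact operation."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a, b):
    """Max-LogMAP keeps only the max term, trading accuracy for speed."""
    return max(a, b)

a, b = 1.3, 0.9
print(max_star(a, b), math.log(math.exp(a) + math.exp(b)))   # identical values
print(max_star_approx(a, b))                                 # off by the correction term
```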
Yuichi KAJI Ryuichi NAKANISHI Hiroyuki SEKI Tadao KASAMI
Multiple context-free grammars (mcfg's) are a subclass of generalized context-free grammars introduced by Pollard in order to describe the syntax of natural languages. First, this paper shows that the universal recognition problem for mcfg's is EXP-POLY time-complete, where the universal recognition problem is the problem of deciding whether G generates w for a given grammar G and string w. Next, it is shown that the problem for linear context-free rewriting systems introduced by Vijay-Shanker et al., which form a proper subclass of mcfg's, is PSPACE-complete.
Yuichi KAJI Ryujiro SHIBUYA Toru FUJIWARA Tadao KASAMI Shu LIN
New algorithms for MAP (also known as APP) decoding and MAX-LogMAP decoding of linear block codes are presented. The algorithms are devised based on the structural properties of linear block codes, and succeed in reducing the decoding complexity without degrading the error performance. The proposed algorithms are suitable for parallel and pipeline processing, which improves the throughput of the decoder. To evaluate the decoding complexity of the proposed algorithms, simulation results for some well-known codes are presented. The results show that the algorithms are especially more efficient than the conventional BCJR-based algorithms for codes whose rates are relatively low.
Toshinori TAKAI Yuichi KAJI Hiroyuki SEKI
We propose a new decidable subclass of term rewriting systems (TRSs) for which the strongly normalizing (SN) property is decidable. The new class is called almost orthogonal inverse finite path overlapping TRSs (AO-FPO^{-1}-TRSs), and it properly includes AO growing TRSs, for which SN is known to be decidable. A tree automata technique is used to show that SN is decidable for AO-FPO^{-1}-TRSs.