Zhenhai TAN Yun YANG Xiaoman WANG Fayez ALQAHTANI
Chenrui CHANG Tongwei LU Feng YAO
Takuma TSUCHIDA Rikuho MIYATA Hironori WASHIZAKI Kensuke SUMOTO Nobukazu YOSHIOKA Yoshiaki FUKAZAWA
Shoichi HIROSE Kazuhiko MINEMATSU
Toshimitsu USHIO
Yuta FUKUDA Kota YOSHIDA Takeshi FUJINO
Qingping YU Yuan SUN You ZHANG Longye WANG Xingwang LI
Qiuyu XU Kanghui ZHAO Tao LU Zhongyuan WANG Ruimin HU
Lei Zhang Xi-Lin Guo Guang Han Di-Hui Zeng
Meng HUANG Honglei WEI
Yang LIU Jialong WEI Shujian ZHAO Wenhua XIE Niankuan CHEN Jie LI Xin CHEN Kaixuan YANG Yongwei LI Zhen ZHAO
Ngoc-Son DUONG Lan-Nhi VU THI Sinh-Cong LAM Phuong-Dung CHU THI Thai-Mai DINH THI
Lan XIE Qiang WANG Yongqiang JI Yu GU Gaozheng XU Zheng ZHU Yuxing WANG Yuwei LI
Jihui LIU Hui ZHANG Wei SU Rong LUO
Shota NAKAYAMA Koichi KOBAYASHI Yuh YAMASHITA
Wataru NAKAMURA Kenta TAKAHASHI
Chunfeng FU Renjie JIN Longjiang QU Zijian ZHOU
Masaki KOBAYASHI
Shinichi NISHIZAWA Masahiro MATSUDA Shinji KIMURA
Keisuke FUKADA Tatsuhiko SHIRAI Nozomu TOGAWA
Yuta NAGAHAMA Tetsuya MANABE
Baoxian Wang Ze Gao Hongbin Xu Shoupeng Qin Zhao Tan Xuchao Shi
Maki TSUKAHARA Yusaku HARADA Haruka HIRATA Daiki MIYAHARA Yang LI Yuko HARA-AZUMI Kazuo SAKIYAMA
Guijie LIN Jianxiao XIE Zejun ZHANG
Hiroki FURUE Yasuhiko IKEMATSU
Longye WANG Lingguo KONG Xiaoli ZENG Qingping YU
Ayaka FUJITA Mashiho MUKAIDA Tadahiro AZETSU Noriaki SUETAKE
Xingan SHA Masao YANAGISAWA Youhua SHI
Jiqian XU Lijin FANG Qiankun ZHAO Yingcai WAN Yue GAO Huaizhen WANG
Sei TAKANO Mitsuji MUNEYASU Soh YOSHIDA Akira ASANO Nanae DEWAKE Nobuo YOSHINARI Keiichi UCHIDA
Kohei DOI Takeshi SUGAWARA
Yuta FUKUDA Kota YOSHIDA Takeshi FUJINO
Mingjie LIU Chunyang WANG Jian GONG Ming TAN Changlin ZHOU
Hironori UCHIKAWA Manabu HAGIWARA
Atsuko MIYAJI Tatsuhiro YAMATSUKI Tomoka TAKAHASHI Ping-Lun WANG Tomoaki MIMOTO
Kazuya TANIGUCHI Satoshi TAYU Atsushi TAKAHASHI Mathieu MOLONGO Makoto MINAMI Katsuya NISHIOKA
Masayuki SHIMODA Atsushi TAKAHASHI
Yuya Ichikawa Naoko Misawa Chihiro Matsui Ken Takeuchi
Katsutoshi OTSUKA Kazuhito ITO
Rei UEDA Tsunato NAKAI Kota YOSHIDA Takeshi FUJINO
Motonari OHTSUKA Takahiro ISHIMARU Yuta TSUKIE Shingo KUKITA Kohtaro WATANABE
Iori KODAMA Tetsuya KOJIMA
Yusuke MATSUOKA
Yosuke SUGIURA Ryota NOGUCHI Tetsuya SHIMAMURA
Tadashi WADAYAMA Ayano NAKAI-KASAI
Li Cheng Huaixing Wang
Beining ZHANG Xile ZHANG Qin WANG Guan GUI Lin SHAN
Sicheng LIU Kaiyu WANG Haichuan YANG Tao ZHENG Zhenyu LEI Meng JIA Shangce GAO
Kun ZHOU Zejun ZHANG Xu TANG Wen XU Jianxiao XIE Changbing TANG
Soh YOSHIDA Nozomi YATOH Mitsuji MUNEYASU
Ryo YOSHIDA Soh YOSHIDA Mitsuji MUNEYASU
Nichika YUGE Hiroyuki ISHIHARA Morikazu NAKAMURA Takayuki NAKACHI
Ling ZHU Takayuki NAKACHI Bai ZHANG Yitu WANG
Toshiyuki MIYAMOTO Hiroki AKAMATSU
Yanchao LIU Xina CHENG Takeshi IKENAGA
Kengo HASHIMOTO Ken-ichi IWATA
Shota TOYOOKA Yoshinobu KAJIKAWA
Kyohei SUDO Keisuke HARA Masayuki TEZUKA Yusuke YOSHIDA
Hiroshi FUJISAKI
Tota SUKO Manabu KOBAYASHI
Akira KAMATSUKA Koki KAZAMA Takahiro YOSHIDA
Tingyuan NIE Jingjing NIE Kun ZHAO
Xinyu TIAN Hongyu HAN Limengnan ZHOU Hanzhou WU
Shibo DONG Haotian LI Yifei YANG Jiatianyi YU Zhenyu LEI Shangce GAO
Kengo NAKATA Daisuke MIYASHITA Jun DEGUCHI Ryuichi FUJIMOTO
Jie REN Minglin LIU Lisheng LI Shuai LI Mu FANG Wenbin LIU Yang LIU Haidong YU Shidong ZHANG
Ken NAKAMURA Takayuki NOZAKI
Yun LIANG Degui YAO Yang GAO Kaihua JIANG
Guanqun SHEN Kaikai CHI Osama ALFARRAJ Amr TOLBA
Zewei HE Zixuan CHEN Guizhong FU Yangming ZHENG Zhe-Ming LU
Bowen ZHANG Chang ZHANG Di YAO Xin ZHANG
Zhihao LI Ruihu LI Chaofeng GUAN Liangdong LU Hao SONG Qiang FU
Kenji UEHARA Kunihiko HIRAISHI
David CLARINO Shohei KURODA Shigeru YAMASHITA
Qi QI Zi TENG Hongmei HUO Ming XU Bing BAI
Ling Wang Zhongqiang Luo
Zongxiang YI Qiuxia XU
Donghoon CHANG Deukjo HONG Jinkeon KANG
Xiaowu LI Wei CUI Runxin LI Lianyin JIA Jinguo YOU
Zhang HUAGUO Xu WENJIE Li LIANGLIANG Liao HONGSHU
Seonkyu KIM Myoungsu SHIN Hanbeom SHIN Insung KIM Sunyeop KIM Donggeun KWON Deukjo HONG Jaechul SUNG Seokhie HONG
Manabu HAGIWARA
Katsuhisa YAMANAKA Shin-ichi NAKANO
A ladder lottery, known as the “Amidakuji” in Japan, is a network with n vertical lines and many horizontal lines, each of which connects two consecutive vertical lines. Each ladder lottery corresponds to a permutation. Ladder lotteries are frequently used as natural models in many areas. Given a permutation π, an algorithm to enumerate all ladder lotteries of π with the minimum number of horizontal lines is known. In this paper, given a permutation π and an integer k, we design an algorithm to enumerate all ladder lotteries of π with exactly k horizontal lines.
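As a concrete illustration of the correspondence between ladder lotteries and permutations, the following minimal Python sketch (the list-of-swaps representation is an assumption for illustration, not the paper's data structure) traces each vertical line downward; every horizontal line acts as an adjacent transposition.

    # Hypothetical representation: a ladder lottery on n vertical lines is a list of
    # horizontal lines, each given by the 0-based index i of the left one of the two
    # consecutive vertical lines it connects, listed from top to bottom.
    def ladder_to_permutation(n, horizontal_lines):
        """Return the permutation realized by tracing each vertical line downward."""
        perm = list(range(n))          # perm[j] = item currently on vertical line j
        for i in horizontal_lines:     # each horizontal line swaps lines i and i+1
            perm[i], perm[i + 1] = perm[i + 1], perm[i]
        return perm

    # Example: on 3 vertical lines, swapping (0,1) and then (1,2) yields [1, 2, 0].
    print(ladder_to_permutation(3, [0, 1]))  # [1, 2, 0]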
Yuma INOUE Takahisa TODA Shin-ichi MINATO
Pattern-avoiding permutations are permutations in which no subsequence matches the relative order of a given pattern. Pattern-avoiding permutations are related to practical and abstract mathematical problems and can provide simple representations for such problems. For example, some floorplans, which are used for optimizing very-large-scale integration (VLSI) circuit design, can be encoded into pattern-avoiding permutations. The generation of pattern-avoiding permutations is therefore important both for efficient VLSI design and for the mathematical analysis of pattern-avoiding permutations. In this paper, we present an algorithm for generating pattern-avoiding permutations, and extend this algorithm beyond classical patterns to generalized patterns with more restrictions. Our approach is based on the data structure πDDs, which can represent a permutation set compactly and supports useful set operations. We demonstrate the efficiency of our algorithm by computational experiments.
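For readers unfamiliar with pattern avoidance, the brute-force Python sketch below (purely illustrative; it does not use the πDD data structure from the paper) checks whether a permutation contains a classical pattern and counts the avoiders of the pattern 132, recovering the Catalan numbers.

    from itertools import combinations, permutations

    def contains_pattern(perm, pattern):
        """True if some subsequence of perm has the same relative order as pattern."""
        k = len(pattern)
        order = [sorted(pattern).index(p) for p in pattern]
        for idx in combinations(range(len(perm)), k):
            sub = [perm[i] for i in idx]
            if [sorted(sub).index(s) for s in sub] == order:
                return True
        return False

    def avoiders(n, pattern):
        """All permutations of {0, ..., n-1} avoiding the given pattern."""
        return [p for p in permutations(range(n)) if not contains_pattern(p, pattern)]

    # The number of 132-avoiding permutations follows the Catalan numbers 1, 2, 5, 14, ...
    print([len(avoiders(n, (0, 2, 1))) for n in range(1, 5)])  # [1, 2, 5, 14]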
Kung-Jui PAI Jou-Ming CHANG Yue-Li WANG Ro-Yu WU
A queue layout of a graph G consists of a linear order of its vertices and a partition of its edges into queues, such that no two edges in the same queue are nested. The queuenumber qn(G) is the minimum number of queues required in a queue layout of G. The Cartesian product of two graphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2), denoted by G_1 × G_2, is the graph with {<v_1,v_2> : v_1 ∈ V_1 and v_2 ∈ V_2} as its vertex set, and an edge (<u_1,u_2>, <v_1,v_2>) belongs to G_1 × G_2 if and only if either (u_1,v_1) ∈ E_1 and u_2 = v_2, or (u_2,v_2) ∈ E_2 and u_1 = v_1. Let T_{k_1,k_2,...,k_n} denote the n-dimensional toroidal grid defined by the Cartesian product of n cycles with varied lengths, i.e., T_{k_1,k_2,...,k_n} = C_{k_1} × C_{k_2} × … × C_{k_n}, where C_{k_i} is a cycle of length k_i ≥ 3. If k_1 = k_2 = … = k_n = k, the graph is also called the k-ary n-cube and is denoted by Q_n^k. In this paper, we deal with queue layouts of toroidal grids and show the following bound: qn(T_{k_1,k_2,...,k_n}) ≤ 2n-2 if n ≥ 2 and k_i ≥ 3 for all i = 1,2,...,n. In particular, for n = 2 and k_1,k_2 ≥ 3, we acquire qn(T_{k_1,k_2}) = 2. Recently, Pai et al. (Inform. Process. Lett. 110 (2009) pp.50-56) showed that qn(Q_n^k) ≤ 2n-1 if n ≥ 1 and k ≥ 9. Thus, our result improves the bound on qn(Q_n^k) when n ≥ 2 and k ≥ 9.
Wen-Yin HUANG Jia-Jie LIU Jou-Ming CHANG Ro-Yu WU
An n-dimensional folded hypercube, denoted by FQ_n, is an enhanced n-dimensional hypercube with one extra link between nodes that have the furthest Hamming distance. Let FF_v (respectively, FF_e) denote the set of faulty nodes (respectively, faulty links) in FQ_n. Under the assumption that every fault-free node in FQ_n is incident to at least two fault-free links, Hsieh et al. (Inform. Process. Lett. 110 (2009) pp.41-53) showed that if |FF_v| + |FF_e| ≤ 2n-4 for n ≥ 3, then FQ_n - FF_v - FF_e contains a fault-free cycle of length at least 2^n - 2|FF_v|. In this paper, we show that, under the same conditional fault model, FQ_n with n ≥ 5 can tolerate more faulty elements while providing the same lower bound on the length of a longest fault-free cycle, i.e., FQ_n - FF_v - FF_e contains a fault-free cycle of length at least 2^n - 2|FF_v| if |FF_v| + |FF_e| ≤ 2n-3 for n ≥ 5.
Tamaki NAKAJIMA Yuuki TANAKA Toru ARAKI
A twin dominating set of a digraph D is a subset S of vertices such that, for every vertex u ∉ S, there are vertices x, y ∈ S such that ux and yu are arcs of D. A digraph D is round if its vertices can be labeled v_0, v_1, ..., v_{n-1} so that, for each vertex v_i, the out-neighbors of v_i appear consecutively following v_i and the in-neighbors of v_i appear consecutively preceding v_i. In this paper, we give polynomial-time algorithms for finding a minimum weight twin dominating set and a minimum weight total twin dominating set of a weighted round digraph. We then show that there is a polynomial-time algorithm for deciding whether a locally semicomplete digraph has an independent twin dominating set. The class of locally semicomplete digraphs contains round digraphs as a special case.
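The definition of a twin dominating set can be checked directly; the short Python sketch below is a verifier only (not the paper's algorithms for round or locally semicomplete digraphs) and tests whether a vertex subset S is twin dominating in a digraph given by out-adjacency lists.

    def is_twin_dominating(adj, S):
        """Check the definition: every u not in S has an out-neighbor and an in-neighbor in S."""
        S = set(S)
        for u in adj:
            if u in S:
                continue
            has_out = any(v in S for v in adj[u])        # arc u -> x with x in S
            has_in = any(u in adj[y] for y in S)         # arc y -> u with y in S
            if not (has_out and has_in):
                return False
        return True

    # Example: directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0; {0, 2} is a twin dominating set.
    adj = {0: [1], 1: [2], 2: [3], 3: [0]}
    print(is_twin_dominating(adj, [0, 2]))  # True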
Hiroshi FUJIWARA Yasuhiro KONNO Toshihiro FUJITO
The multislope ski-rental problem is an extension of the classical ski-rental problem in which, in addition to the pure rent and buy options, the player has several intermediate options that each charge both a per-time fee and an initial fee. Damaschke gave a lower bound of 3.62 on the competitive ratio for the case where an arbitrary number of options can be offered. In this paper we propose a scheme that, given the number of options as input, provides a lower bound on the competitive ratio, by extending the method of Damaschke. This is the first result that establishes a lower bound for each of the cases with five or more options; for example, a lower bound of 2.95 for the 5-option case, 3.08 for the 6-option case, and 3.18 for the 7-option case. Moreover, it turns out that our lower bounds for the 3- and 4-option cases coincide with the known upper bounds. We therefore conjecture that our scheme in general derives matching lower and upper bounds.
Zachary ABEL Erik D. DEMAINE Martin L. DEMAINE Takashi HORIYAMA Ryuhei UEHARA
We prove NP-completeness of deciding whether a given loop of colored right isosceles triangles, hinged together at edges, can be folded into a specified rectangular three-color pattern. By contrast, the same problem becomes polynomially solvable with one color or when the target shape is a tree-shaped polyomino.
Erik D. DEMAINE Yoshio OKAMOTO Ryuhei UEHARA Yushi UNO
Shakashaka is a pencil-and-paper puzzle proposed by Guten and popularized by the Japanese publisher Nikoli (like Sudoku). We determine the computational complexity by proving that Shakashaka is NP-complete, and furthermore that counting the number of solutions is #P-complete. Next we formulate Shakashaka as an integer-programming (IP) problem, and show that an IP solver can solve every instance from Nikoli's website within a second.
Jinhee CHUN Akiyoshi SHIOURA Truong MINH TIEN Takeshi TOKUYAMA
We give a unified view of greedy geometric routing algorithms in ad hoc networks. For this, we first present a general form of greedy routing algorithm using a class of objective functions which are invariant under congruent transformations of a point set. We show that several known greedy routing algorithms, such as Greedy Routing, Compass Routing, and Midpoint Routing, can be regarded as special cases of the generalized greedy routing algorithm. In addition, inspired by the unified view of greedy routing, we propose three new greedy routing algorithms. We then derive a sufficient condition for our generalized greedy routing algorithm to guarantee packet delivery on every Delaunay graph. This condition makes it easier to check whether a given routing algorithm guarantees packet delivery, and it is closed under convex linear combinations of objective functions. It is shown that Greedy Routing, Midpoint Routing, and the three new greedy routing algorithms proposed in this paper satisfy the sufficient condition, i.e., they guarantee packet delivery on Delaunay graphs. We also discuss the advantages and disadvantages of these methods.
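The unified view can be made concrete with a small Python sketch (illustrative assumptions: 2-D coordinates and explicit neighbor lists; this is not the paper's exact formulation): each known rule is obtained by plugging a different congruence-invariant objective function into the same forwarding step.

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Objective functions f(v, cur, dst); the packet is forwarded to the neighbor
    # minimizing f, which yields different known greedy routing rules.
    def greedy_obj(v, cur, dst):        # Greedy Routing: distance to the destination
        return dist(v, dst)

    def midpoint_obj(v, cur, dst):      # Midpoint Routing: distance to the midpoint of cur and dst
        mid = ((cur[0] + dst[0]) / 2, (cur[1] + dst[1]) / 2)
        return dist(v, mid)

    def compass_obj(v, cur, dst):       # Compass Routing: angle between (cur->v) and (cur->dst)
        a = math.atan2(v[1] - cur[1], v[0] - cur[0])
        b = math.atan2(dst[1] - cur[1], dst[0] - cur[0])
        return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

    def forward(cur, neighbors, dst, objective):
        """One routing step: pick the neighbor minimizing the objective function."""
        return min(neighbors, key=lambda v: objective(v, cur, dst))

    # Example step on a tiny node set.
    print(forward((0, 0), [(1, 0), (0, 1)], (2, 1), greedy_obj))  # (1, 0)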
In holographic data storage, information is recorded within the volume of a holographic medium. Typically, the data is presented as an array of pixels with modulation in amplitude and/or phase. In the 4-f orientation, the Fourier domain representation of the data array is produced optically, and this image is recorded. If the Fourier image contains large peaks, the recording material can saturate, which leads to errors in the read-out data array. In this paper, we present a coding process that produces sparse ternary data arrays. Ternary modulation is used because it inherently provides Fourier domain smoothing and allows more data to be stored per array in comparison to binary modulation. Sparse arrays contain fewer on-pixels than dense arrays, and thus contain less power overall, which reduces the severity of peaks in the Fourier domain. The coding process first converts binary data to a sequence of ternary symbols via a high-rate block code, and then uses guided scrambling to produce a set of candidate codewords, from which the most sparse is selected to complete the encoding process. Our analysis of the guided scrambling division and selection processes demonstrates that, with primitive scrambling polynomials, a sparsity greater than 1/3 is guaranteed for all encoded arrays, and that the probability of this worst-case sparsity decreases with increasing block size.
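The selection step of the guided scrambling stage can be sketched in Python as below; the block code and the scrambling division are abstracted away, the candidate lists are hypothetical, and "sparsity" is taken here as the fraction of zero (off) pixels, which is an assumption about the paper's definition.

    def sparsity(word):
        """Fraction of off (zero) symbols in a ternary codeword."""
        return word.count(0) / len(word)

    def select_sparsest(candidates):
        """Guided scrambling keeps the candidate codeword with the most off-pixels."""
        return max(candidates, key=sparsity)

    # Example: three hypothetical ternary candidates for the same source block.
    cands = [[1, 0, -1, 1, 0, 1], [0, 0, 1, 0, -1, 0], [1, -1, 1, 1, 0, -1]]
    best = select_sparsest(cands)
    print(best, sparsity(best))  # [0, 0, 1, 0, -1, 0] 0.666...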
Kazuto OGAWA Go OHTAKE Arisa FUJII Goichiro HANAOKA
For the sake of privacy preservation, services that are offered on the basis of individual user preferences should be provided with a sufficient degree of anonymity. We surveyed various tools that meet the requirements of such services and concluded that group signature schemes with weakened anonymity (without unlinkability) are adequate. Then, we investigated the theoretical gap between unlinkability of group signature schemes and their other requirements. We show that this gap is significantly large. Specifically, we clarify that if unlinkability can be achieved from any other property of group signature schemes, it becomes possible to construct a chosen-ciphertext secure cryptosystem from any one-way function. This result implies that the efficiency of group signature schemes can be drastically improved if unlinkability is not taken into account. We also demonstrate a way to construct a scheme without unlinkability that is significantly more efficient than the best known full-fledged scheme.
Atsushi TAKAYASU Noboru KUNIHIRO
At CaLC 2001, Howgrave-Graham proposed a polynomial-time algorithm for solving univariate linear equations modulo an unknown divisor of a known composite integer, the so-called partially approximate common divisor problem. So far, two multivariate generalizations of the problem have been considered in the context of cryptanalysis. The first is simultaneous modular univariate linear equations, for which a polynomial-time algorithm was proposed at ANTS 2012 by Cohn and Heninger. The second is modular multivariate linear equations, for which a polynomial-time algorithm was proposed at Asiacrypt 2008 by Herrmann and May. Both algorithms cover Howgrave-Graham's algorithm for univariate cases. On the other hand, both multivariate problems also become identical to Howgrave-Graham's problem in the asymptotic cases of root bounds; however, the former algorithms do not cover Howgrave-Graham's algorithm in such cases. In this paper, we introduce a strategy for natural algorithm constructions that take the sizes of the root bounds into account. We work out the selection of polynomials in constructing lattices. Our algorithms are superior to all known attacks that solve the multivariate equations and can be generalized to an arbitrary number of variables. Our algorithms achieve better cryptanalytic bounds for some applications related to RSA cryptosystems.
Noboru KUNIHIRO Naoyuki SHINOHARA Tetsuya IZU
We discuss how to recover RSA secret keys from noisy key bits with erasures and errors. There are two known algorithms that recover the original secret key from a noisy one. At Crypto 2009, Heninger and Shacham proposed a method for the case where an erroneous version of the secret key contains only erasures. Subsequently, at Crypto 2010, Henecka et al. proposed a method for an erroneous version containing only errors. For physical attacks such as side-channel and cold boot attacks, we need to study key recovery from a noisy secret key containing both erasures and errors. In this paper, we propose a method to recover the secret key from such an erroneous version and analyze the condition on the error and erasure rates under which our algorithm succeeds in finding the correct secret key in polynomial time. We also evaluate a theoretical bound for recovering the secret key and discuss to what extent our algorithm achieves this bound.
Noboru KUNIHIRO Naoyuki SHINOHARA Tetsuya IZU
In this paper, we present a lattice-based method for the small secret exponent attack on the RSA scheme. Boneh and Durfee reduced the attack to finding the small roots of the bivariate modular equation x(N+1+y)+1 ≡ 0 (mod e), where N is an RSA modulus and e is the RSA public key, and proposed a lattice-based algorithm for solving the problem. When the secret exponent d is less than N^0.292, their method breaks the RSA scheme. Since the lattice used in the analysis is not full-rank, the analysis is not easy. Blömer and May proposed an alternative algorithm that uses a full-rank lattice, even though it gives a bound (d ≤ N^0.290) that is worse than Boneh-Durfee's. However, the proof for their bound is still complicated. Herrmann and May, however, have given an elementary proof for Boneh-Durfee's bound d ≤ N^0.292. In this paper, we first give an elementary proof achieving Blömer-May's bound d ≤ N^0.290. Our proof employs the unravelled linearization technique introduced by Herrmann and May and is rather simpler than Blömer-May's proof. We then provide a unified framework, which subsumes the two previous methods (the Herrmann-May and the Blömer-May methods) as special cases, for constructing a lattice that can be used to solve the problem. In addition, we prove that Boneh-Durfee's bound d ≤ N^0.292 is still optimal in our unified framework.
Lossy identification schemes are used to construct tightly secure signature schemes via the Fiat-Shamir heuristic in the random oracle model. Several lossy identification schemes have been instantiated using the short discrete logarithm assumption, the ring-LWE assumption, and the subset sum assumption, respectively. For assumptions related to integer factoring, Abdalla, Ben Hamouda and Pointcheval [3] recently presented lossy identification schemes based on the φ-hiding assumption, the QR assumption, and the DCR assumption, respectively. In this paper, we propose new instantiations of lossy identification schemes. We first construct a variant of Schnorr's identification scheme and show its lossiness under the subgroup decision assumption. We also construct a lossy identification scheme based on the DCR assumption. Our DCR-based scheme has an advantage over the ABP DCR-based scheme in that it needs no modular exponentiation in the response phase. Therefore, our scheme is suitable for transformation into an online/offline signature.
Atsushi FUJIOKA Taiichi SAITO Keita XAGAWA
This paper proposes a generic construction of hierarchical identity-based identification (HIBI) protocols secure against impersonation under active and concurrent attacks in the standard model. The proposed construction converts a digital signature scheme that is existentially unforgeable against chosen-message attacks and has a protocol for showing possession of a signing key, not a signature. Our construction is based on the so-called certificate-based construction of hierarchical identity-based cryptosystems, and utilizes a variant of the well-known OR-proof technique to ensure security against impersonation under active and concurrent attacks. We also present several concrete instantiations of our construction employing the Waters signature (EUROCRYPT 2005) and other signatures. As a result, the concurrent security of each instantiation is proved under the computational Diffie-Hellman (CDH) assumption, the RSA assumption, or their variants in the standard model. Chin, Heng, and Goi proposed an HIBI protocol that is passively and concurrently secure under the CDH and one-more CDH assumptions, respectively (FGIT-SecTech 2009); however, its security is proved in the random oracle model.
Atsushi FUJIOKA Eiichiro FUJISAKI Keita XAGAWA
We study non-malleability of multiple public-key encryption (ME) schemes. The main difference between ME and threshold public-key encryption schemes is that there is no dealer to share a secret among users: each user can independently choose their own public keys, and a sender can encrypt a message under an ad-hoc set of multiple public keys of his choice. In this paper we tackle non-malleability of ME. We note that prior works only consider confidentiality of messages and treat the case where all public keys are chosen by honest users. In the multiple public-key setting, however, some applications naturally require non-malleability of ciphertexts under multiple public keys including those of malicious users. Therefore, we study this case and obtain the following results:
·We present three definitions of non-malleability of ME: simulation-based, comparison-based, and indistinguishability-based ones. These definitions can be seen as analogues of those for non-malleable public-key encryption (PKE) schemes. Interestingly, our definitions are all equivalent even for “invalid-allowing” relations. We note that the counterparts for PKE are not equivalent for such relations.
·The previous strongest security notion for ME, “indistinguishability against strong chosen-ciphertext attacks (sMCCA)” [1], does not imply our notion of non-malleability against chosen-plaintext attacks.
·Non-malleability of ME guarantees that the single-message indistinguishability-based notion is equivalent to the multiple-message simulation-based notion, which provides designers with a fundamental benefit.
·We define a new, stronger decryption robustness for ME. A non-malleable ME scheme is meaningful in practice if it also has decryption robustness.
·We present a constant ciphertext-size ME scheme (meaning that the length of a ciphertext is independent of the number of public keys) that is secure in our strongest notion of non-malleability. Indeed, the ciphertext overhead (i.e., the length of a ciphertext minus that of a plaintext) is the combined length of two group elements plus one hash value, regardless of the number of public keys. Moreover, the partial decryption of one user consists of only two group elements, regardless of the length of the plaintext.
ID-based authenticated key exchange (ID-AKE) is a cryptographic tool to establish a common session key between parties with authentication based on their IDs. If IDs contain some hierarchical structure, such as an e-mail address, hierarchical ID-AKE (HID-AKE) is especially suitable because of its scalability. However, most existing HID-AKE schemes do not satisfy advanced security properties such as forward secrecy, and the only known strongly secure HID-AKE scheme is inefficient. In this paper, we propose a new HID-AKE scheme which achieves both strong security and efficiency. We prove that our scheme is eCK-secure (which ensures maximal-exposure-resilience including forward secrecy) without random oracles, while existing schemes are proved in the random oracle model. Moreover, the numbers of messages and pairing operations are independent of the hierarchy depth; that is, our scheme is truly scalable and practical for a large system.
Koutarou SUZUKI Kazuki YONEYAMA
This paper studies Tripartite Key Exchange (3KE), which is a special case of Group Key Exchange (GKE). Though general one-round GKE satisfying advanced security properties such as forward secrecy and maximal-exposure-resilience (MEX-resilience) is not known, it can be efficiently constructed with the help of pairings in the 3KE case. In this paper, we introduce the first one-round 3KE which is MEX-resilient in the standard model, whereas existing one-round 3KE schemes are proved in the random oracle model (ROM) or are not MEX-resilient. Each party broadcasts 4 group elements and executes 14 pairing operations. The complexity is only three to four times larger in computation and communication than that of the most efficient existing MEX-resilient 3KE scheme in the ROM; thus, our protocol is adequately practical.
Yichao LU Xiao PENG Guifen TIAN Satoshi GOTO
Majority-logic algorithms are devised for decoding non-binary LDPC codes in order to reduce computational complexity. However, compared with conventional belief propagation algorithms, majority-logic algorithms suffer from severe bit error performance degradation. This paper presents a low-complexity reliability-based algorithm aiming at improving the error correcting ability of majority-logic algorithms. Reliability measures for check nodes are newly introduced to realize mutual updates between variable messages and check messages, and hence more efficient reliability propagation can be achieved, similar to the belief propagation algorithm. Simulation results on NB-LDPC codes with different characteristics demonstrate that our algorithm can reduce the bit error ratio by more than one order of magnitude and that the coding gain over ISRB-MLGD can reach 0.2-2.0 dB, compared with both the ISRB-MLGD and IISRB-MLGD algorithms. Moreover, simulations on typical LDPC codes show that the computational complexity of the proposed algorithm is closely equivalent to that of the ISRB-MLGD algorithm and is less than 10% of that of the Min-max algorithm. As a result, the proposed algorithm achieves a more efficient trade-off between decoding computational complexity and error performance.
Hirotoshi HONMA Yoko NAKAJIMA Yuta IGARASHI Shigeru MASUYAMA
Consider a simple undirected graph G = (V,E) with vertex set V and edge set E. Let G-u be the subgraph induced by the vertex set V-{u}. The distance δ_G(x,y) is defined as the length of the shortest path between vertices x and y in G. A vertex u ∈ V is a hinge vertex if there exist two vertices x,y ∈ V-{u} such that δ_{G-u}(x,y) > δ_G(x,y). Let U be the set consisting of all hinge vertices of G. The neighborhood of u is the set of all vertices adjacent to u and is denoted by N(u). We define d(u) = max{δ_{G-u}(x,y) | δ_{G-u}(x,y) > δ_G(x,y), x,y ∈ N(u)} for u ∈ U as the detour degree of u. The maximum detour hinge vertex problem is to find a hinge vertex u with maximum d(u) in G. In this paper, we propose an algorithm that finds the maximum detour hinge vertex in an interval graph in O(n^2) time, where n is the number of vertices in the graph.
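As a reference point, the following brute-force Python sketch computes hinge vertices and detour degrees on an arbitrary graph using BFS distances; it only illustrates the definitions and is far less efficient than the paper's O(n^2) algorithm for interval graphs.

    from collections import deque

    def bfs_dist(adj, src, skip=None):
        """Shortest-path distances from src, optionally ignoring vertex `skip`."""
        dist = {src: 0}
        q = deque([src])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y != skip and y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        return dist

    def max_detour_hinge_vertex(adj):
        """Return a hinge vertex with maximum detour degree d(u), or (None, -1)."""
        best = (None, -1)
        for u in adj:
            without_u = {x: bfs_dist(adj, x, skip=u) for x in adj[u]}
            for x in adj[u]:
                base = bfs_dist(adj, x)                        # delta_G(x, .)
                for y in adj[u]:
                    det = without_u[x].get(y, float("inf"))    # delta_{G-u}(x, y)
                    if det > base.get(y, float("inf")) and det > best[1]:
                        best = (u, det)
        return best

    # In a 5-cycle every vertex is a hinge vertex with detour degree 3.
    C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(max_detour_hinge_vertex(C5))  # (0, 3)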
Shinsuke ODAGIRI Hiroyuki GOTO
For a fixed number of nodes, we focus on directed acyclic graphs in which there is no shortcut. We identify the case in which the number of maximal paths is maximized and give the corresponding count of maximal paths. Considering this case is essential for solving large-scale scheduling problems using a PERT chart.
Tatsuya FUJIMOTO Tsunehiro YOSHINAGA Makoto SAKAMOTO
A cooperating system of finite automata (CS-FA) consists of more than one finite automaton (FA) and an input tape. The FA's operate independently on the input tape and can communicate with each other on the same cell of the input tape. For each k ≥ 1, let L[CS-1DFA(k)] (resp. L[CS-1UFA(k)]) be the class of sets accepted by CS-FA's with k one-way deterministic finite automata (resp. alternating finite automata with only universal states). We show that L[CS-1DFA(k+1)] - L[CS-1UFA(k)] ≠ ∅ and L[CS-1UFA(2)] - ∪_{1≤k<∞} L[CS-1DFA(k)] ≠ ∅.
Ryuichi HARASAWA Yutaka SUEYOSHI Aichi KUDO
In the paper [4], the authors generalized the Cipolla-Lehmer method [2][5] for computing square roots in finite fields to the case of r-th roots with r prime, and compared it with the Adleman-Manders-Miller method [1] from the experimental point of view. In this paper, we compare these two methods from the theoretical point of view.
In this paper, we show a connection between #P and computing the (real) value of a high-order derivative at the origin. Consider, as a problem instance, an integer b and a sufficiently often differentiable function F(x) that is given as a string. Then we consider computing the value F^(b)(0) of the b-th derivative of F(x) at the origin. By exhibiting a polynomial as an example, we show that FP = #P if we can compute log_2 F^(b)(0) up to a certain precision. The statement holds even if F(x) is limited to a function that is analytic at every x ∈ R. It implies the hardness of computing the b-th term of a number sequence from the closed form of its generating function.
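The connection is easiest to see for polynomials: if F(x) = Σ_i a_i x^i, then F^(b)(0) = b!·a_b, so the b-th derivative at the origin reveals the b-th coefficient of the generating function, e.g. a solution count. A minimal Python illustration (assuming F is given by its coefficient list) follows.

    from math import factorial

    def derivative_at_zero(coeffs, b):
        """b-th derivative at 0 of the polynomial with the given coefficient list."""
        return factorial(b) * coeffs[b] if b < len(coeffs) else 0

    # F(x) = (1 + x)^4: the coefficient of x^2 is C(4, 2) = 6, so F''(0) = 2! * 6 = 12.
    print(derivative_at_zero([1, 4, 6, 4, 1], 2))  # 12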
This paper proposes a new approach to defining and expressing algorithms: the notion of task logical algorithms. This notion allows the user to define an algorithm for a task T as a set of agents that can collectively perform T. It considerably simplifies the algorithm development process and can be seen as an integration of sequential pseudocode and logical algorithms. This observation requires some changes to the algorithm development process. We propose a two-step approach: the first step is to define an algorithm for a task T via a set of agents that can collectively perform T; the second step is to translate these agents into (higher-order) computability logic.
Rongchun LI Yong DOU Jie ZHOU Chen CHEN
The parallel interference cancellation (PIC) multiple-input multiple-output (MIMO) detection algorithm has bit error ratio (BER) performance comparable to that of the maximum likelihood (ML) algorithm, but with complexity close to that of simple linear detection algorithms such as zero forcing (ZF), minimum mean squared error (MMSE), and successive interference cancellation (SIC). However, the throughput of a PIC MIMO detector on a central processing unit (CPU) cannot meet the requirements of wireless protocols. In order to reach the throughput required by the standards, the graphics processing unit (GPU) is exploited in this paper as the modem processor to accelerate the PIC MIMO detector. The parallelism of the PIC algorithm is analyzed and a two-stage PIC detection is carefully developed to efficiently match the multi-core architecture. Several optimization methods are employed to enhance the throughput, such as memory optimization and asynchronous data transfer. Experiments show that our MIMO detector has excellent BER performance and a peak throughput of 337.84 Megabits per second (Mbps), about 7x to 16x faster than that of a CPU implementation with SSE2 optimizations. The implemented MIMO detector has better computing throughput than recent GPU-based implementations.
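The two-stage idea can be sketched in a few lines of numpy (illustrative only; the paper's GPU kernel organization, modulation, and exact detector stages are not reproduced, and all parameters below are hypothetical): a linear MMSE first stage produces tentative symbols, and a parallel cancellation stage subtracts the estimated interference for every stream and re-detects it.

    import numpy as np

    def qpsk_slice(x):
        """Hard decision to the nearest QPSK symbol (+-1 +-1j)/sqrt(2)."""
        return (np.sign(np.real(x)) + 1j * np.sign(np.imag(x))) / np.sqrt(2)

    def mmse_pic_detect(H, y, noise_var):
        nt = H.shape[1]
        # Stage 1: linear MMSE estimate of all streams, followed by hard decisions.
        W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(nt)) @ H.conj().T
        s0 = qpsk_slice(W @ y)
        # Stage 2: for each stream (in parallel on a GPU), subtract the estimated
        # interference from the other streams and re-detect with a matched filter.
        s1 = np.empty(nt, dtype=complex)
        for k in range(nt):
            interference = H @ s0 - H[:, k] * s0[k]
            r = y - interference
            s1[k] = qpsk_slice(H[:, k].conj() @ r / np.linalg.norm(H[:, k]) ** 2)
        return s1

    rng = np.random.default_rng(1)
    H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    s = qpsk_slice(rng.standard_normal(4) + 1j * rng.standard_normal(4))
    y = H @ s + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
    print("true:    ", s)
    print("detected:", mmse_pic_detect(H, y, 0.05 ** 2))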
Daisaburo YOSHIOKA Akio TSUNEDA
Since substitution boxes (S-boxes) are the only nonlinear portion of most block ciphers, the design of cryptographically strong and low-complexity S-boxes is of great importance in cryptosystems. In this paper, a new kind of S-box obtained by iterating a discretized piecewise linear map is proposed. The S-box can be implemented efficiently in both software and hardware. Moreover, the results of performance tests show that the proposed S-box has good cryptographic properties.
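As a hedged illustration of the construction principle, the Python sketch below builds an 8-bit S-box by iterating a discretized skew tent map; the specific map, parameters, and number of iterations are assumptions for illustration and are not the map proposed in the paper, and a bijectivity check guards the sketch.

    def skew_tent(x, a, n=256):
        """Discretized skew tent map on {1, ..., n} with breakpoint a (1 <= a < n)."""
        if x <= a:
            return -(-n * x // a)                # ceil(n*x/a)
        return (n * (n - x)) // (n - a) + 1      # floor(n*(n-x)/(n-a)) + 1

    def make_sbox(a=97, rounds=3, n=256):
        """Iterate the map several times for every input byte to form the S-box."""
        sbox = []
        for x in range(1, n + 1):
            y = x
            for _ in range(rounds):
                y = skew_tent(y, a, n)
            sbox.append(y - 1)                   # shift back to the byte range 0..255
        assert sorted(sbox) == list(range(n)), "iterated map must remain bijective"
        return sbox

    sbox = make_sbox()
    print(sbox[:8])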
Jian LIU Lusheng CHEN Xuan GUANG
In this paper, we provide several methods to construct nonlinear resilient functions with multiple good cryptographic properties, including high nonlinearity, high algebraic degree, and non-existence of linear structures. Firstly, we present an improvement on a known construction of resilient S-boxes such that the nonlinearity and the algebraic degree will become higher in some cases. Then a construction of highly nonlinear t-resilient Boolean functions without linear structures is given, whose algebraic degree achieves n-t-1, which is optimal for n-variable t-resilient Boolean functions. Furthermore, we construct a class of resilient S-boxes without linear structures, which possesses the highest nonlinearity and algebraic degree among all currently known constructions.
Hiroyuki MIURA Yasufumi HASHIMOTO Tsuyoshi TAKAGI
It is well known that solving randomly chosen Multivariate Quadratic equations over a finite field (the MQ-Problem) is NP-hard, and the security of Multivariate Public Key Cryptosystems (MPKCs) is based on the MQ-Problem. However, this problem can be solved efficiently when the number of unknowns n is sufficiently greater than the number of equations m (such systems are called “underdefined”). Indeed, the algorithm by Kipnis et al. (Eurocrypt'99) can solve the MQ-Problem over a finite field of even characteristic in time polynomial in n when n ≥ m(m+1). Therefore, it is important to estimate the hardness of the MQ-Problem in order to evaluate the security of Multivariate Public Key Cryptosystems. In this paper, we propose an algorithm that can solve the MQ-Problem in time polynomial in n when n ≥ m(m+3)/2, which has a wider applicable range than that of Kipnis et al. We also compare our proposed algorithm with other known algorithms. Moreover, we implemented this algorithm in Magma and solved an MQ-Problem instance with m = 28 and n = 504 in 78.7 seconds on a common PC.
Wentao LV Junfeng WANG Wenxian YU Zhen TAN
In compressed sensing, the design of the measurement matrix is a key task. In order to achieve a more precise reconstruction result, the columns of the measurement matrix should have better orthogonality or linear incoherence. Currently, a random matrix, such as a Gaussian random matrix (GRM), is commonly adopted as the measurement matrix. However, the columns of a random matrix are only statistically orthogonal. By substituting an orthogonal basis into the random matrix to construct a semi-random measurement matrix, and by optimizing the mutual coherence between dictionary columns to approach a theoretical lower bound, the linear incoherence of the measurement matrix can be greatly improved. With this optimized measurement matrix, the signal can be reconstructed from its measurements more precisely.
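The quantity being optimized, the mutual coherence of the effective dictionary, and the Welch lower bound it is pushed toward can be computed with a short numpy sketch (generic matrices for illustration; the paper's optimization procedure itself is not reproduced).

    import numpy as np

    def mutual_coherence(D):
        """Largest absolute normalized inner product between distinct columns of D."""
        Dn = D / np.linalg.norm(D, axis=0)
        G = np.abs(Dn.T @ Dn)
        np.fill_diagonal(G, 0.0)
        return G.max()

    def welch_bound(m, n):
        """Theoretical lower bound on the coherence of an m x n dictionary (n > m)."""
        return np.sqrt((n - m) / (m * (n - 1)))

    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((32, 64))                    # Gaussian random measurement matrix
    Psi = np.linalg.qr(rng.standard_normal((64, 64)))[0]   # orthonormal sparsifying basis
    D = Phi @ Psi                                          # effective dictionary
    print(mutual_coherence(D), welch_bound(*D.shape))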
Hongyu HAN Daiyuan PENG Xing LIU
For frequency hopping spread spectrum communication systems, the average Hamming correlation (AHC) among frequency hopping sequences (FHSs) is an important performance indicator. In this letter, a necessary and sufficient condition for a set of FHSs with optimal AHC is given. Based on the interleaving technique, a new construction of optimal AHC FHS sets is also proposed, which generalizes the construction of Chung and Yang. Several optimal AHC FHS sets with more flexible parameters not covered in the literature are obtained by the new construction; they are summarized in Table 1.
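For reference, the Python sketch below computes average Hamming auto- and cross-correlations of an FHS set under one common definition with cyclic shifts; exact normalizations vary across papers, so these formulas should be read as an assumption rather than the letter's definitions.

    def hamming_corr(x, y, shift):
        """Number of positions where x agrees with y cyclically shifted by `shift`."""
        n = len(x)
        return sum(1 for t in range(n) if x[t] == y[(t + shift) % n])

    def average_hamming_correlations(fhs_set):
        """Average out-of-phase autocorrelation and average cross-correlation."""
        n = len(fhs_set[0])
        auto = [hamming_corr(x, x, s) for x in fhs_set for s in range(1, n)]
        cross = [hamming_corr(x, y, s)
                 for i, x in enumerate(fhs_set) for j, y in enumerate(fhs_set)
                 if i < j for s in range(n)]
        avg_auto = sum(auto) / len(auto)
        avg_cross = sum(cross) / len(cross) if cross else 0.0
        return avg_auto, avg_cross

    # Example: a toy set of two length-5 sequences over the frequency alphabet {0, 1, 2}.
    print(average_hamming_correlations([[0, 1, 2, 0, 1], [2, 0, 1, 1, 0]]))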
Honggyu JUNG Kwang-Yul KIM Yoan SHIN
We propose a cooperative compressed spectrum sensing scheme for correlated signals in wideband cognitive radio networks. In order to design a reconstruction algorithm that accurately recovers the wideband signals from the compressed samples in low SNR (signal-to-noise ratio) environments, we consider the multiple measurement vector model exploiting a sequence of input signals and propose a cooperative sparse Bayesian learning algorithm which models the temporal correlation of the input signals. Simulation results show that the proposed scheme outperforms existing compressed sensing algorithms at low SNRs.
Yunpyo HONG Juwon BYUN Youngjo KIM Jaeseok KIM
This letter proposes a pipelined architecture with prediction mode scheduling for high efficiency video coding (HEVC). The increased number of intra prediction modes in HEVC has introduced a new technique named rough mode decision (RMD). This development, however, means that pipeline architectures for H.264 cannot be used in HEVC. The proposed scheme executes the RMD and rate-distortion optimization (RDO) processes simultaneously by grouping the intra prediction modes and changing the candidate selection method of the RMD algorithm. The proposed scheme reduces the execution cycles by up to 26% with negligible coding loss.