Spectral graph theory provides an algebraic approach to investigating the characteristics of weighted networks using the eigenvalues and eigenvectors of a matrix (e.g., the normalized Laplacian matrix) that represents the structure of the network. However, it is difficult to represent the structures of large-scale and complex networks (e.g., social networks) accurately as a matrix. This difficulty can be avoided if there is a universality such that the eigenvalues are independent of the detailed structure in large-scale and complex networks. In this paper, we clarify Wigner's Semicircle Law for weighted networks as such a universality. The law indicates that the eigenvalues of the normalized Laplacian matrix of a weighted network can be calculated from a few network statistics (the average degree, average link weight, and square average link weight) when the network satisfies a sufficient condition on the node degrees and link weights.
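As a quick numerical illustration of this universality, the following sketch (our own toy setup, not the paper's experiments; the Erdős-Rényi graph model and the weight distribution are assumptions) computes the normalized Laplacian spectrum of a random weighted graph together with the three statistics named above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_edge = 1000, 0.05
A = np.triu((rng.random((n, n)) < p_edge).astype(float), 1)
A += A.T                                      # symmetric adjacency, no self-loops
W = np.triu(rng.uniform(0.5, 1.5, (n, n)), 1) * A
W += W.T                                      # symmetric random link weights

d = W.sum(axis=1)                             # weighted degrees
L = np.eye(n) - W / np.sqrt(np.outer(d, d))   # normalized Laplacian I - D^{-1/2} W D^{-1/2}
eig = np.linalg.eigvalsh(L)

w = W[np.triu(A, 1) > 0]                      # link weights of the upper triangle
print("average degree:", A.sum() / n)
print("average weight:", w.mean(), " square average weight:", (w ** 2).mean())
print("eigenvalues cluster around 1 with spread", eig.std())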
Liang ZHU Youguo WANG Jian LIU
Identifying the infection sources in a network, such as the originator of a network rumor, the servers that inject a computer virus into a computer network, or patient zero in an infectious disease network, plays a critical role in limiting the damage caused by the infection. In this paper, a two-source estimator is first constructed on the basis of partitions of infection regions. The two-source estimation problem is then transformed into calculating the expected number of permitted permutations, which can be simplified to a single-source estimation problem once the infection region is determined. A heuristic algorithm is also proposed to extend the estimator to general graphs in a Breadth-First Search (BFS) fashion. Experimental results are provided to verify the performance of our method and to illustrate how the detection error varies across different networks.
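The abstract does not give the estimator's details; as a minimal point of reference, the sketch below implements a generic single-source heuristic in the same BFS fashion (the Jordan-center rule is an assumption of ours, not the authors' two-source estimator): pick the infected node whose maximum BFS distance to the other infected nodes is smallest.

```python
from collections import deque

def bfs_dist(adj, s):
    # Standard BFS distances from s in an unweighted graph.
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def jordan_center(adj, infected):
    best, best_ecc = None, float("inf")
    for s in infected:
        d = bfs_dist(adj, s)
        ecc = max(d[v] for v in infected)   # eccentricity within the infected set
        if ecc < best_ecc:
            best, best_ecc = s, ecc
    return best

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
print(jordan_center(adj, {0, 1, 3}))        # -> 1
```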
Ryo HASE Mitsue IMAHORI Norihiko SHINOMIYA
The relationships between producers and consumers have been changed radically by the recent growth of the sharing economy. Promoting resource sharing can contribute to solving environmental issues (e.g., reducing food waste and consuming surplus electricity). Since prosumers play both roles, as consumers and as suppliers, a matching between suppliers and consumers must be determined when prosumers share resources. In particular, because the capacity of prosumers to supply resources is limited, it is important to achieve envy-freeness, a metric indicating how small the number of prosumers who perceive unfairness is kept. Changing resource capacity and demand makes the situation even more complex. This paper proposes a resource sharing model based on a temporal network and flows to realize envy-free resource sharing among prosumers. Experimental results demonstrate that the deviation of envy among prosumers can be reduced by setting appropriate weights in the flow network.
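As a toy illustration of counting envy (our own simplification; the paper's temporal-network flow model is not reproduced here), the sketch below allocates a limited supply greedily and counts prosumers who received a smaller fraction of their demand than some peer:

```python
def allocate(capacities, demands):
    # Greedy first-come-first-served allocation of the pooled supply.
    supply = sum(capacities)
    alloc = []
    for d in demands:
        give = min(d, supply)
        alloc.append(give)
        supply -= give
    return alloc

def envy_count(demands, alloc):
    # A prosumer "envies" if some peer got a strictly larger demand fraction.
    frac = [a / d if d else 1.0 for a, d in zip(alloc, demands)]
    return sum(1 for f in frac if any(f < g - 1e-9 for g in frac))

demands = [4, 3, 5]
alloc = allocate([6, 3], demands)            # total supply 9 < total demand 12
print(alloc, envy_count(demands, alloc))     # [4, 3, 2] 1
```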
Yusuke YANO Kengo IOKIBE Toshiaki TESHIMA Yoshitaka TOYOTA Toshihiro KATASHITA Yohei HORI
Side-channel (SC) leakage from a cryptographic device chip is simulated as the dynamic current flowing out of the chip. When evaluating the simulated current, comparison with an actual measurement is essential; however, it is difficult to compare them directly. This is because a measured waveform is typically the output voltage of a probe placed at an observation position outside the chip, where the actual dynamic current has been modified by several transfer impedances. Therefore, in this paper, the probe voltage is converted into the dynamic current by using an EMC macro-model of the cryptographic device being evaluated. This paper shows that both the amplitude and the SC analysis results (correlation power analysis and measurements to disclosure) of the simulated dynamic current were evaluated appropriately by using the EMC macro-model. The evaluation confirms that the shape of the simulated current matches the measured one; moreover, the SC analysis results agree well with the measured ones. On the basis of these results, it is confirmed that a register-transfer level (RTL) simulation of the dynamic current gives a reasonable estimation of SC traces.
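The conversion itself is, at its core, a deconvolution by a transfer impedance; the sketch below shows only that generic step (the impedance model and all numeric values are placeholders of ours, not the paper's EMC macro-model): since V(f) = Z(f)·I(f), the dynamic current is estimated as I(f) = V(f)/Z(f).

```python
import numpy as np

fs = 1e9                                        # sample rate [Hz] (assumed)
t = np.arange(1024) / fs
v_probe = 0.1 * np.sin(2 * np.pi * 50e6 * t)    # dummy measured probe voltage [V]

f = np.fft.rfftfreq(t.size, 1 / fs)
Z = 50 + 2j * np.pi * f * 5e-9                  # placeholder path: 50 ohm + 5 nH
i_est = np.fft.irfft(np.fft.rfft(v_probe) / Z, n=t.size)
print("peak estimated dynamic current [A]:", np.abs(i_est).max())
```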
The membership check of a group is an important operation for securely implementing discrete logarithm-based cryptography in practice. Since this check requires a costly scalar multiplication or exponentiation, several efficient methods have been investigated. In the case of pairing-based cryptography, an extended research area of discrete logarithm-based cryptography, Barreto et al. (LATINCRYPT 2015) proposed a parameter choice called subgroup-secure elliptic curves. They also claimed that, in some schemes, if an elliptic curve is subgroup-secure, the costly scalar multiplication or exponentiation can be omitted from the membership check of bilinear groups, resulting in faster schemes than the original ones. They noticed that some schemes would not maintain security with this omission, but they did not state explicitly which schemes become insecure. In this paper, we show a concrete example of insecurity in the sense of subgroup security to help developers understand what subgroup security is and which properties it preserves. In conclusion, we recommend that developers use the original membership check because it is a general and straightforward way to implement schemes securely. If developers want to use subgroup-secure elliptic curves and omit the costly operation in a scheme for performance reasons, it is critical to carefully verify that correctness and security are preserved under the omission.
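For concreteness, the general membership check the authors recommend looks like the following in a multiplicative group (the toy parameters are ours; in the elliptic-curve setting a scalar multiplication [r]P plays the role of the exponentiation): an element g lies in the subgroup of prime order r iff g^r = 1.

```python
# Toy parameters: p = 1019 is prime and p - 1 = 2 * 509 with 509 prime.
p, r = 1019, 509

def in_subgroup(g, p, r):
    return pow(g, r, p) == 1       # g is in the order-r subgroup iff g^r = 1

h = pow(2, (p - 1) // r, p)        # cofactor clearing yields a subgroup element
print(in_subgroup(h, p, r))        # True
print(in_subgroup(2, p, r))        # False: 2 lies outside the order-r subgroup
```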
Yuki NANJO Masaaki SHIRASE Takuya KUSAKA Yasuyuki NOGAMI
To be suitable in practice, pairings are typically carried out in two steps: the Miller loop and the final exponentiation. To improve the final exponentiation step of a pairing on the BLS family of pairing-friendly elliptic curves with embedding degree 15, the authors provide a new representation of the exponent. The proposal achieves a greater reduction of the calculation cost of the final exponentiation than the previous method by Fouotsa et al.
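For background (this is the standard decomposition, not the paper's new representation): the final exponentiation raises the Miller-loop output f to (p^15 - 1)/r, conventionally split into an easy part and a hard part via the 15th cyclotomic polynomial,

\[
f^{(p^{15}-1)/r} \;=\; \Bigl(f^{(p^{15}-1)/\Phi_{15}(p)}\Bigr)^{\Phi_{15}(p)/r},
\qquad
\Phi_{15}(p) = p^8 - p^7 + p^5 - p^4 + p^3 - p + 1,
\]

where the authors' contribution concerns a cheaper representation of the hard-part exponent \(\Phi_{15}(p)/r\).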
Takuma ITO Naoyuki SHINOHARA Shigenori UCHIYAMA
Multivariate public key cryptosystems (MPKC) form one of the major families of post-quantum cryptosystems (PQC), and the National Institute of Standards and Technology (NIST) recently selected four MPKCs as candidates in its PQC standardization. The security of MPKC depends on the hardness of solving systems of algebraic equations over finite fields. In particular, the multivariate quadratic (MQ) problem is that of solving such a system consisting of quadratic polynomials and is regarded as an important research subject in cryptography. The Fukuoka MQ challenge project discusses the hardness of the MQ problem and reports algorithms for solving it along with the computational results obtained by these algorithms. Algorithms for computing Gröbner bases are the main tools for solving the MQ problem; for example, the F4 algorithm and the M4GB algorithm have solved many instances of the MQ problem provided by the project. In this paper, based on the F4-style algorithm, we present an efficient algorithm for solving MQ problems with dense polynomials generated in the Fukuoka MQ challenge project. We show experimentally that our algorithm requires less computational time and memory for these MQ problems than the F4 and M4GB algorithms. Using our algorithm, we succeeded in solving Type II and Type III problems of the Fukuoka MQ challenge with 37 variables in both cases.
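For readers new to the MQ problem, here is a deliberately tiny instance solved by exhaustive search (this illustrates the problem statement only; the paper's F4-style Gröbner-basis algorithm is far beyond this sketch):

```python
from itertools import product

# Tiny MQ system over GF(2) in 3 variables; the right-hand side is chosen
# so that (1, 1, 0) is a planted solution.
def polys(x):
    x0, x1, x2 = x
    return ((x0 * x1 + x2) % 2, (x0 + x1 * x2) % 2, (x0 * x2 + x1) % 2)

target = polys((1, 1, 0))
for cand in product((0, 1), repeat=3):
    if polys(cand) == target:
        print("solution:", cand)
```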
Firas KRAIEM Shuji ISOBE Eisuke KOIZUMI Hiroki SHIZUYA
Knowledge-of-exponent assumptions (KEAs) are a somewhat controversial but nevertheless commonly used type of cryptographic assumption. While traditional cryptographic assumptions simply assert that certain tasks (like factoring integers or computing discrete logarithms) cannot be performed efficiently, KEAs assert that certain tasks can be performed efficiently, but only in certain ways. The controversy surrounding these assumptions stems from their non-falsifiability, which is due to the way the idea is formalised, and from the general perception that they are “strong”. Nevertheless, their relationship to existing assumptions has not received much attention thus far. In this paper, we show that the first KEA (KEA1), introduced by Damgård in 1991, implies that computing discrete logarithms is equivalent to solving the computational Diffie-Hellman (CDH) problem. Since showing this equivalence in the standard setting (i.e., without assuming that KEA1 holds) is a longstanding open question, this indicates that KEA1 (and KEAs in general) are indeed quite strong assumptions.
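Informally (our paraphrase of the standard formulation, not the paper's statement), KEA1 says that any efficient algorithm A which, given a pair (g, g^a), returns (C, Y) with Y = C^a must “know” an exponent x with C = g^x, in the sense that an efficient extractor \(\bar{A}\) can output it:

\[
\forall A \;\exists \bar{A}: \quad
\Pr\bigl[(C, Y) \leftarrow A(g, g^a),\; x \leftarrow \bar{A}(g, g^a) \;:\; Y = C^a \,\wedge\, C \neq g^x \bigr]
\;\text{is negligible}.
\]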
Kazuki NAGANUMA Takashi SUZUKI Hiroyuki TSUJI Tomoaki KIMURA
Gaussian integers have the potential to enhance the safety of elliptic curve cryptography (ECC) on systems where the bit lengths of the integer and floating-point types are fixed, from the viewpoint of the order of the finite field. However, there appears to be no algorithm that makes Gaussian integer ECC safer under this condition. We present an algorithm to enhance the safety of ECC under this condition. We then confirm that, in terms of the order of the finite field, our Gaussian integer ECC is safer than rational integer ECC or Gaussian integer ECC based on naive methods under the same condition.
Soh YOSHIDA Mitsuji MUNEYASU Takahiro OGAWA Miki HASEYAMA
In this paper, we address the problem of analyzing the topics included in a social video group to improve the retrieval performance of videos. Unlike previous methods that focused on an individual visual aspect of videos, the proposed method aims to leverage the “mutual reinforcement” of heterogeneous modalities, such as the tags and users associated with videos on the Internet. To represent multiple types of relationships between the heterogeneous modalities, the proposed method constructs three subgraphs: user-tag, video-video, and video-tag graphs. We combine the three graphs to obtain a heterogeneous graph. The extraction of latent features, i.e., topics, then becomes feasible by applying graph-based soft clustering to the heterogeneous graph. By estimating the membership of each grouped cluster for each video, the proposed method defines a new video similarity measure. Since the understanding of video content is enhanced by exploiting latent features obtained from different types of data that complement each other, the proposed method improves the performance of visual reranking. Results of experiments on a video dataset consisting of YouTube-8M videos show the effectiveness of the proposed method, which achieves a 24.3% improvement in terms of the mean normalized discounted cumulative gain in a search ranking task compared with the baseline method.
Yuki NANJO Masaaki SHIRASE Takuya KUSAKA Yasuyuki NOGAMI
A quadratic extension field (QEF) defined by F1 = Fp[α]/(α^2+1) is typically used for supersingular isogeny Diffie-Hellman (SIDH). However, there exist other attractive QEFs Fi that result in competitive or even more efficient performance of SIDH compared with that of F1. To exploit these QEFs without time-consuming computation of the initial settings, the authors propose converting existing parameter sets defined over F1 to Fi by using an isomorphic map F1 → Fi.
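For orientation, arithmetic in F1 = Fp[α]/(α^2+1), with elements stored as pairs x + yα, looks as follows (a minimal sketch with a toy prime of our choosing; the paper's isomorphic conversion F1 → Fi is not shown):

```python
p = 431   # toy SIDH-style prime: 431 = 2^4 * 3^3 - 1, and p = 3 mod 4
          # so a^2 + 1 is irreducible over Fp

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def mul(u, v):
    # (x1 + y1*a)(x2 + y2*a) with a^2 = -1
    return ((u[0] * v[0] - u[1] * v[1]) % p,
            (u[0] * v[1] + u[1] * v[0]) % p)

print(add((2, 3), (5, 7)))   # (7, 10)
print(mul((2, 3), (5, 7)))   # (2+3a)(5+7a) = -11 + 29a = (420, 29) mod 431
```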
Yuta NAKAHARA Toshiyasu MATSUSHIMA
A spatially “Mt. Fuji” coupled (SFC) low-density parity-check (LDPC) ensemble is a modified version of the spatially coupled (SC) LDPC ensemble. Its decoding error probability in the waterfall region has previously been studied only experimentally. In this paper, we analyze it theoretically over the binary erasure channel by modifying the expected graph evolution (EGE) and covariance evolution (CE) that have been used to analyze the original SC-LDPC ensemble. In particular, we derive the initial condition modified for the SFC-LDPC ensemble. Unlike with the SC-LDPC ensemble, the solution of the EGE and CE for the SFC-LDPC ensemble then exhibits a local minimum. Considering this property, we theoretically expect the waterfall curve of the SFC-LDPC ensemble to be steeper than that of the SC-LDPC ensemble, and we confirm this by numerical experiments.
Haitao XIE Qingtao FAN Qian XIAO
Nowadays, recommender systems (RS) keep drawing attention from academia, and collaborative filtering (CF) is the most successful technique for building RS. To overcome the inherent limitation of CF known as data sparsity, various solutions have been proposed that incorporate additional social information, such as trust networks, into the recommendation process. However, existing methods struggle with multi-source data integration (i.e., the fusion of social information and ratings), which is the basis for computing the similarity of user preferences. To this end, we propose a social collaborative filtering method based on novel trust metrics. First, we use Graph Convolutional Networks (GCNs) to learn the associations between social information and user ratings while considering the underlying social network structures. Second, we measure the direct-trust values between neighbors by representing multi-source data as user ratings on popular items, and then calculate the indirect-trust values based on trust propagation. Third, we employ all trust values to create a social regularization term in the user-item rating matrix factorization in order to avoid overfitting. Experiments on real datasets show that our approach outperforms other state-of-the-art methods in its use of multi-source data to alleviate data sparsity.
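The social-regularization idea can be sketched in a few lines (a hedged toy of ours: one full-gradient loop of matrix factorization in which a trust matrix T pulls trusted users' latent vectors together; the paper's GCN-based trust learning and propagation are not shown):

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k = 4, 5, 2
R = rng.integers(0, 6, (n_users, n_items)).astype(float)   # 0 = unrated
T = np.eye(n_users)[[1, 0, 3, 2]]        # toy trust: users (0,1) and (2,3) trust each other
U = rng.normal(0, 0.1, (n_users, k))     # user latent factors
V = rng.normal(0, 0.1, (n_items, k))     # item latent factors
lam, beta, lr = 0.1, 0.5, 0.01           # L2 weight, social weight, step size

for _ in range(200):
    mask = R > 0                          # fit observed ratings only
    err = mask * (R - U @ V.T)
    U += lr * (err @ V - lam * U - beta * (U - T @ U))   # social pull toward T @ U
    V += lr * (err.T @ U - lam * V)

print(np.round(U @ V.T, 2))               # reconstructed rating matrix
```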
Ryo SHIBATA Gou HOSOYA Hiroyuki YASHIMA
For insertion and deletion channels, there are many coding schemes based on low-density parity-check (LDPC) codes, such as spatially coupled (SC) LDPC codes and concatenations of irregular LDPC codes with markers. However, most previous works have problems such as poor finite-length performance and unrealistic settings for codeword lengths and decoding iterations. Moreover, when markers are used, the decoder receives log-likelihood ratio (LLR) messages whose statistics differ depending on the code bit position. In this paper, we propose a novel concatenation scheme of a protograph-based LDPC code and markers that offers excellent asymptotic/finite-length performance and a structure that controls the irregularity of the LLR messages. We also present a density evolution analysis and a simple optimization procedure for the proposed concatenated coding scheme. For two decoding scenarios with different decoding complexities, both the asymptotic decoding thresholds and the finite-length performance demonstrate that the newly designed concatenated coding scheme outperforms the existing counterparts: the irregular LDPC code with markers, the SC-LDPC code, and the protograph LDPC code optimized for an additive white Gaussian noise channel, with markers.
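The marker mechanism itself is simple; the sketch below shows only the (assumed) transmitter side, with a fixed known pattern placed periodically into the code bits so the receiver can re-establish synchronization after insertions or deletions:

```python
def insert_markers(code_bits, marker=(0, 1), period=10):
    # Insert the known marker pattern after every `period` code bits.
    out = []
    for i, b in enumerate(code_bits):
        if i and i % period == 0:
            out.extend(marker)
        out.append(b)
    return out

# 25 all-one code bits: the (0, 1) markers are visible after bits 10 and 20.
print(insert_markers([1] * 25))
```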
Eiji MIYANO Toshiki SAITOH Ryuhei UEHARA Tsuyoshi YAGITA Tom C. van der ZANDEN
This paper introduces the maximization version of the k-path vertex cover problem, called the MAXIMUM K-PATH VERTEX COVER problem (MaxPkVC for short): A path consisting of k vertices, i.e., a path of length k-1, is called a k-path. If a k-path Pk includes a vertex v in a vertex set S, then we say that v or S covers Pk. Given a graph G=(V, E) and an integer s, the goal of MaxPkVC is to find a vertex subset S⊆V of at most s vertices such that the number of k-paths covered by S is maximized. The problem MaxPkVC is NP-hard in general. In this paper we consider the tractability/intractability of MaxPkVC on subclasses of graphs. We prove that MaxP3VC remains NP-hard even for split graphs. Furthermore, if the input graph is restricted to graphs with constant bounded treewidth, then MaxP3VC can be solved in polynomial time.
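To make the objective concrete for k = 3, the following exhaustive sketch (exponential in |V|, for illustration only; not the paper's algorithms) counts the 3-paths covered by every candidate set S of size s:

```python
from itertools import combinations

def three_paths(adj):
    # A 3-path is determined by its middle vertex and its two endpoints.
    return {(v, frozenset(pair)) for v in adj for pair in combinations(adj[v], 2)}

def max_p3vc(adj, s):
    paths = three_paths(adj)
    return max(
        sum(1 for mid, ends in paths if mid in set(S) or ends & set(S))
        for S in combinations(adj, s)
    )

adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
print(max_p3vc(adj, 1))   # S = {1} covers all 4 of the graph's 3-paths
```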
Hiroshi ONUKI Yusuke AIKAWA Tsutomu YAMAZAKI Tsuyoshi TAKAGI
At ASIACRYPT 2018, Castryck, Lange, Martindale, Panny and Renes proposed CSIDH, a key-exchange protocol based on isogenies between elliptic curves and a candidate for post-quantum cryptography. However, the implementation by Castryck et al. is not constant-time; specifically, a part of the secret key could be recovered by side-channel attacks. Recently, Meyer, Campos, and Reith proposed a constant-time implementation of CSIDH by introducing dummy isogenies and taking the secret exponents only from intervals of non-negative integers. These non-negative intervals make the calculation cost of their implementation twice that of the worst case of the standard (variable-time) implementation of CSIDH. In this paper, we propose a more efficient constant-time algorithm that takes the secret exponents from intervals symmetric with respect to zero. To use these intervals, we need to keep two torsion points on an elliptic curve and perform calculations on them. We evaluate the costs of our implementation and that of Meyer et al. in terms of the number of operations on a finite prime field. Our evaluation shows that our constant-time implementation of CSIDH reduces the calculation cost by 28% compared with the implementation by Meyer et al. We also implemented our algorithm by extending the C implementation of Meyer et al. (originally from Castryck et al.). Our implementation achieved 152 million clock cycles, which is about 29% faster than that of Meyer et al. and confirms the above reduction ratio in our cost evaluation.
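The intuition behind the saving can be shown with a toy count (our simplification of the constant-time accounting; the abstract's 28% figure is smaller than this idealized bound because it includes the overhead of maintaining the two torsion points):

```python
# In a constant-time ladder, real and dummy isogenies for each small prime
# always add up to the interval bound, so the bound is the per-prime cost.
m = 5
cost_nonnegative = 2 * m   # exponents drawn from [0, 2m] -> 2m steps per prime
cost_symmetric = m         # exponents drawn from [-m, m] ->  m steps per prime
print(cost_nonnegative, cost_symmetric)   # 10 5: up to half the isogeny steps
```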
Dongliang CHEN Peng SONG Wenjing ZHANG Weijian ZHANG Bingui XU Xuan ZHOU
In this letter, we propose a novel robust transferable subspace learning (RTSL) method for cross-corpus facial expression recognition. In this method, on the one hand, we present a novel distance metric algorithm, which jointly considers the local and global distance distribution measures, to reduce the cross-corpus mismatch. On the other hand, we design a label guidance strategy to improve the discriminative ability of the subspace. Thus, RTSL is much more robust to the cross-corpus recognition problem than traditional transfer learning methods. We conduct extensive experiments on several facial expression corpora to evaluate the recognition performance of RTSL. The results demonstrate the superiority of the proposed method over several state-of-the-art methods.
This paper deals with shortest-path-based in-trees on a grid graph, in which a root is supposed to move among all vertices. As such a spanning mobility pattern, root trajectories based on a Hamilton path or cycle are discussed. Along such a trajectory, each vertex randomly selects its next hop on a shortest path to the root. Under these assumptions, this paper shows that a root trajectory termed an S-path provides the minimum expected symmetric difference. Numerical experiments show that another trajectory, termed a Right-cycle, also provides the minimum result.
Only a limited number of sound event types occur in an acoustic scene, and some sound events tend to co-occur in a scene; for example, the sound events “dishes” and “glass jingling” are likely to co-occur in the acoustic scene “cooking.” In this paper, we propose a method of sound event detection using graph Laplacian regularization that takes sound event co-occurrence into account. In the proposed method, the occurrences of sound events are expressed as a graph whose nodes indicate the frequencies of event occurrence and whose edges indicate the sound event co-occurrences. This graph representation is then utilized for training the sound event detection model, which is optimized under an objective function with a regularization term that considers the graph structure of sound event occurrence and co-occurrence. Evaluation experiments using the TUT Sound Events 2016 and 2017 datasets and the TUT Acoustic Scenes 2016 dataset show that the proposed method improves the performance of sound event detection by 7.9 percentage points compared with the conventional CNN-BiGRU-based detection method in terms of the segment-based F1 score. In particular, the experimental results indicate that the proposed method enables more accurate detection of co-occurring sound events than the conventional method.
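Graph Laplacian regularization has a standard closed form; the sketch below (a generic stand-in with a toy co-occurrence graph of our own, not the paper's trained model) shows how the penalty zᵀLz rewards similar representations for frequently co-occurring events:

```python
import numpy as np

A = np.array([[0, 3, 0],          # toy co-occurrence counts among 3 events
              [3, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A    # combinatorial graph Laplacian
z = np.array([1.0, 0.9, 0.1])     # 1-D representations of the 3 events
penalty = z @ L @ z               # equals sum_{i<j} A[i,j] * (z_i - z_j)^2
print(penalty)                    # 3*(1-0.9)^2 + 1*(0.9-0.1)^2 = about 0.67
```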
Saung Hnin Pwint OO Nguyen Duy HUNG Thanaruk THEERAMUNKONG
While existing inference engines solve real-world problems using probabilistic knowledge representation, one challenging task is to efficiently utilize that representation under uncertainty during conflict resolution. This paper presents a new approach that straightforwardly combines a rule-based system (RB) with a probabilistic graphical inference framework, i.e., a naïve Bayesian network (BN), towards probabilistic argumentation via a so-called probabilistic assumption-based argumentation (PABA) framework. The rule-based system formalizes its rules into defeasible logic under the assumption-based argumentation (ABA) framework, while the Bayesian network provides probabilistic reasoning. Through this knowledge integration, the former provides a solid testbed for inference, while the latter helps the former resolve persistent conflicts by setting an acceptance threshold. Experiments show the effectiveness of this approach to conflict resolution via an example of liver disorder diagnosis.