Hikaru TSUCHIDA Takashi NISHIDE
Multiparty computation (MPC) is a cryptographic method that enables a set of parties to compute an arbitrary joint function of their private inputs without revealing any information other than the output. MPC based on a secret sharing scheme (SS-MPC) and garbled circuits (GC) are the most common MPC approaches. Another cryptographic method, homomorphic encryption (HE), computes an arbitrary function represented as a circuit on ciphertexts without decrypting them. These technologies involve trade-offs among communication complexity, round complexity, and computation cost. Private decision tree evaluation (PDTE) is one of the key applications of these technologies. There exist several constant-round PDTE protocols based on GC, HE, or hybrid schemes that remain secure even if a malicious adversary, who can deviate from the protocol specification, corrupts some parties. There also exist protocols based only on SS-MPC that are secure only if a semi-honest adversary, who follows the protocol specification, corrupts some parties. However, to the best of our knowledge, there are currently no constant-round PDTE protocols based only on SS-MPC that are secure against a malicious adversary. In this work, we propose a constant-round four-party PDTE protocol that achieves malicious security. Our protocol performs PDTE securely and efficiently even when the communication environment has large latency.
Kei FUJIMOTO Ko NATORI Masashi KANEKO Akinori SHIRAGA
Real-time applications are becoming increasingly popular, and owing to the demand for more compact and portable user devices, offloading terminal processes to edge servers is being considered. Edge servers, which are often virtualized for operability, must therefore process packets with low latency. When trying to achieve low-latency networking, the increase in server power consumption due to performance tuning and busy polling for fast packet receiving becomes a problem. Thus, we design and implement a low-latency and energy-efficient networking system, energy-efficient kernel busy poll (EE-KBP), which meets four requirements: (A) low latency on the order of microseconds for packet forwarding in a virtual server, (B) lower power consumption than existing solutions, (C) no need for application modification, and (D) no need for software redevelopment with each kernel security update. EE-KBP sets a polling thread in the Linux kernel that receives packets with low latency in polling mode while packets are arriving; when no packets are arriving, it sleeps and lowers the CPU operating frequency. Evaluations indicate that EE-KBP achieves microsecond-order low-latency networking under most traffic conditions, and 1.4× to 3.1× higher throughput with lower power consumption than NAPI used in the Linux kernel.
Tomu MAKITA Atsuki NAGAO Tatsuki OKADA Kazuhisa SETO Junichi TERUYAMA
A branching program is a well-studied model of computation and a representation of Boolean functions. It is a directed acyclic graph with a unique root node, some accepting nodes, and some rejecting nodes. Except for the accepting and rejecting nodes, each node is labeled with a variable, and each outgoing edge of the node is labeled with a 0/1 assignment of that variable. The satisfiability problem for branching programs is, given a branching program with n variables and m nodes, to determine whether there exists an assignment that activates a consistent path from the root to an accepting node. The width of a branching program is the maximum number of nodes at any level. The satisfiability problem for width-2 branching programs is known to be NP-complete. In this paper, we present a satisfiability algorithm for width-2 branching programs with n variables and cn nodes, and show that its running time is poly(n)·2^((1-μ(c))n), where μ(c)=1/2^(O(c log c)). Our algorithm consists of two phases. First, we transform a given width-2 branching program into a set of structured formulas that consist of AND and exclusive-OR gates. Then, we check the satisfiability of these formulas by a greedy restriction method that depends on the frequency of occurrence of variables.
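Since the abstract only states the problem, a minimal brute-force satisfiability check may help fix ideas. The node encoding below is hypothetical, and this is the trivial 2^n baseline, not the paper's poly(n)·2^((1-μ(c))n) algorithm:

```python
from itertools import product

def bp_satisfiable(nodes, root, n):
    """Brute-force satisfiability of a branching program.

    nodes maps a node id to ("var", i, succ0, succ1) for an inner node
    labeled with variable x_i, or to "accept"/"reject" for a sink.
    """
    for bits in product([0, 1], repeat=n):
        cur = root
        while isinstance(nodes[cur], tuple):
            _, i, succ0, succ1 = nodes[cur]
            cur = succ1 if bits[i] else succ0
        if nodes[cur] == "accept":
            return True
    return False

# A width-2 program accepting exactly when x0 XOR x1 = 1.
xor_bp = {
    0: ("var", 0, 1, 2),
    1: ("var", 1, "rej", "acc"),
    2: ("var", 1, "acc", "rej"),
    "acc": "accept",
    "rej": "reject",
}
print(bp_satisfiable(xor_bp, 0, 2))  # True, e.g. via x0=0, x1=1
```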
Van Hung PHAM Tuan Hung NGUYEN Hisashi MORISHITA
In a previous study, we proposed a new method based on copula theory to evaluate the detection performance of distributed-processing multistatic radar systems, in which the dependence of local decisions was modeled by a Gaussian copula with linear dependence and no tail dependence. However, we also noted that one main limitation of that study was the lack of investigation into the tail dependence and nonlinear dependence among local detectors' inputs, whose densities have long tails and are often used to model clutter and wanted signals in high-resolution radars. In this work, we attempt to overcome this shortcoming by extending the proposed method to several types of multivariate copula-based dependence models, to clarify in detail the effects of tail dependence and different dependence models on the system detection performance. Our careful analysis provides two interesting and important clarifications: first, the detection performance degrades significantly with tail dependence; and second, this degradation mainly originates from the upper tail dependence, while the lower tail and nonlinear dependence unexpectedly improve the system performance.
Ningkang CHEN Ping WEI Lin GAO Huaguo ZHANG Hongshu LIAO
This paper aims to design multiple-input multiple-output (MIMO) radar receiving weights and transmitting waveforms in order to obtain better spatial filtering performance and enhance robustness in the presence of signal-dependent interference and inaccurately estimated angles of the target and interference. An alternating iterative optimization algorithm is proposed for the joint design problem. Specifically, the receiving weights are designed by the generalized eigenvalue decomposition of the matrix that contains the estimated information of the target and interference. As the cost function of the transmitting waveform design is fractional, the fractional optimization problem is first converted into a quadratic optimization problem. Based on the proposed algorithm, a closed-form solution of the waveform is given using alternating projection. At the analysis stage, in the presence of estimation errors under signal-dependent interference, robust signal-to-interference-plus-noise ratio (SINR) performance is obtained with a small amount of computation via an iterative procedure. Numerical examples verify the performance of the designed waveform in terms of SINR, beampattern, and pulse compression.
Yuanwei HOU Yu GU Weiping LI Zhi LIU
Fast-evolving credential attacks have been a great security challenge to current password-based information systems. Recently, biometric factors such as the face, iris, or fingerprint, which are difficult to forge, have risen as key elements for designing passwordless authentication. However, capturing and analyzing such factors usually requires special devices, hindering their feasibility and practicality. To this end, we present WiASK, a device-free WiFi-sensing-enabled Authentication System exploring Keystroke dynamics. More specifically, WiASK captures the keystrokes of a user typing a pre-defined, easy-to-remember string by leveraging the existing WiFi infrastructure. Instead of focusing on the string itself, which is vulnerable to password attacks, WiASK interprets the way it is typed, i.e., keystroke dynamics, into user identity, based on the biologically validated correlation between them. We prototype WiASK on low-cost off-the-shelf WiFi devices and verify its performance in three real environments. Empirical results show that WiASK achieves on average 93.7% authentication accuracy, a 2.5% false accept rate, and a 5.1% false reject rate.
The road space rationing (RSR) method regulates periods in which each user group can make telephone calls in order to decrease the call attempt rate and induce calling parties to shorten their calls during disaster congestion. This paper investigates, using experimental psychology, which settings of this indirect control induce more self-restraint and how the settings change calling parties' behavior. Our experiments revealed that the length of the regulated period affected calling parties' behavior (call duration and call attempt rate) differently and indicated that the 60-min RSR method (i.e., ten six-min periods) is the most effective setting against disaster congestion.
Kosuke KIMURA Masato YOSHIDA Keisuke KASAI Toshihiko HIROOKA Masataka NAKAZAWA
In this paper, we report an experimental and numerical analysis of ultrahigh-speed coherent Nyquist pulse transmission. First, we describe a low-nonlinearity dispersion compensator for ultrahigh-speed coherent Nyquist pulse transmission; it is composed of a chirped fiber Bragg grating (CFBG) and a liquid crystal on silicon (LCoS) device. By adopting a CFBG instead of inverse dispersion fiber, the nonlinearity in a 160 km transmission line was more than halved. Furthermore, by eliminating the group delay fluctuation of the CFBG with an LCoS device, the residual group delay was reduced to as low as 1.42 ps over an 11 nm bandwidth. Then, by using the transmission line with the newly constructed low-nonlinearity dispersion compensator, we succeeded in improving the BER performance of single-channel 15.3 Tbit/s-160 km transmission by one-third compared with that of a conventional dispersion-managed transmission line and obtained a spectral efficiency of 8.7 bit/s/Hz. Furthermore, we numerically analyzed the BER performance of the Nyquist pulse transmission. The numerical results showed that nonlinear impairment in the transmission line is the main factor limiting the performance of coherent Nyquist pulse transmission, and it becomes more significant at higher baud rates.
Takaharu KATO Ikuko SHIMIZU Tomas PAJDLA
Selecting visually overlapping image pairs without any prior information is an essential task of large-scale structure from motion (SfM) pipelines. To address this problem, many state-of-the-art image retrieval systems adopt the idea of bag of visual words (BoVW) for computing image-pair similarity. In this paper, we present a method for improving image pair selection using BoVW. Our method combines a conventional vector-based approach and a set-based approach. For the set similarity, we introduce a modified version of the Simpson (m-Simpson) coefficient. We show the advantage of this measure over three typical set similarity measures and demonstrate that the combination of vector similarity and the m-Simpson coefficient effectively reduces false positives and increases accuracy. To discuss the choice of vocabulary construction, we prepared both a vocabulary sampled from an evaluation dataset and a basic pre-trained vocabulary built on a training dataset. In addition, we tested our method on vocabularies of different sizes. Our experimental results show that the proposed method dramatically improves precision scores, especially on the sampled vocabulary, and performs better than state-of-the-art methods that use pre-trained vocabularies. We further introduce a method to determine the value k of top-k relevant searches for each image and show that it obtains higher precision at the same recall.
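The abstract does not give the formula for the m-Simpson coefficient, but the classical Simpson (overlap) coefficient it modifies is simple. A hedged sketch of blending it with BoVW vector similarity follows; the 0.5/0.5 weighting and all names are illustrative, not the paper's choices:

```python
import math

def simpson(a, b):
    """Classical Simpson (overlap) coefficient of two visual-word sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def cosine(u, v):
    """Cosine similarity of two BoVW term-frequency vectors (dicts)."""
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def pair_similarity(u, v, alpha=0.5):
    """Blend vector-based and set-based similarity (weight is illustrative)."""
    return alpha * cosine(u, v) + (1 - alpha) * simpson(set(u), set(v))

img1 = {1: 2, 2: 1, 3: 1}        # visual word id -> count
img2 = {2: 1, 3: 2, 4: 1, 5: 1}
print(pair_similarity(img1, img2))
```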
Keisuke INAZAWA Akihiro KASHIHARA
Self-review is essential for improving presentations, particularly for novice/unskilled researchers. In general, they can record a video of their presentation and then review it. However, they often feel quite uncomfortable with their appearance and voice in the video, and they also struggle with in-depth self-review. To address these issues, we designed a presentation avatar that reproduces the presentation made by a researcher. The presentation avatar is intended to increase self-awareness through self-reviewing. We also designed a checklist, which includes the points to be reviewed, to aid in a detailed self-review. This paper also demonstrates presentation avatar systems that use a virtual character and a robot to allow novice/unskilled researchers, as learners, to self-review their own presentations using the checklist. The results of case studies with the systems indicate that the presentation avatar systems have the potential to promote self-review. In particular, we found that the robot avatar promoted engagement in self-reviewing presentations.
We consider network security exercises in which students construct virtual networks with User-mode Linux (UML) virtual machines and then execute attack and defense activities on these networks. In an older version of the exercise system, the students accessed the desktop screens of the remote servers running UMLs through Windows applications and then built networks by executing UML commands. However, performing the exercises remotely (e.g., due to the COVID-19 pandemic) resulted in difficulties due to factors such as the dependency of the work environment on specific operating systems, narrow-band networks, and issues in providing support for configuring UMLs. In this paper, a novel web-based hands-on system with intuitive and seamless operability and lightweight responsiveness is proposed to allow performing the considered exercises while avoiding these shortcomings. The system provides web pages for intuitively editing device layouts and cable connections by mouse operations, web pages for connecting to UML terminals, and web pages for operating X clients running on UMLs. We carried out experiments to evaluate the proposed system in terms of usability, system performance, and quality of experience. The subjects offered positive assessments of the operability and no negative assessments of the responsiveness. As for command inputs in terminals, the response time was shorter and the traffic was much smaller in comparison with the older system. Furthermore, the exercises using nano required at least 16 kbps of bandwidth, and those using wireshark required at least 2048 kbps.
Kai YAN Tiejun ZHAO Muyun YANG
Graph layouts reveal the global or local structure of graph data. However, there are few studies on assisting readers in better reconstructing a graph from its layout. This paper attempts to generate a layout whose edges can be reestablished. We reformulate the graph layout problem as an edge classification problem: the inputs are vertex pairs, the outputs are edge existences, and the trainable parameters are the laid-out coordinates of the vertices. We propose a binary classification-based graph layout (BCGL) framework. This layout aims to preserve the local structure of the graph and does not require the total similarity relationships of the vertices. We implement two concrete algorithms under the BCGL framework, evaluate our approach on a wide variety of datasets, and draw comparisons with several other methods. The evaluations verify the ability of the BCGL in local neighborhood preservation and its visual quality under several classic metrics.
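As a toy illustration of the edge-classification view (not either of the paper's two BCGL algorithms), one can treat the 2-D vertex coordinates as the trainable parameters of a logistic edge predictor and descend a cross-entropy loss; the predictor form, constants, and numerical gradients below are all illustrative assumptions:

```python
import math, random

def edge_prob(p, q, bias=1.0):
    """Probability that (p, q) is an edge: sigmoid(bias - squared distance)."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return 1.0 / (1.0 + math.exp(-(bias - d2)))

def loss(coords, edges, n):
    """Binary cross-entropy over all vertex pairs."""
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            p = edge_prob(coords[i], coords[j])
            y = 1.0 if (i, j) in edges else 0.0
            total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total

def bcgl_layout(edges, n, steps=200, lr=0.05, eps=1e-5):
    """Descend the loss over coordinates using central-difference gradients."""
    rng = random.Random(0)
    coords = [[rng.random(), rng.random()] for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            for k in range(2):
                coords[i][k] += eps
                up = loss(coords, edges, n)
                coords[i][k] -= 2 * eps
                down = loss(coords, edges, n)
                coords[i][k] += eps  # restore
                coords[i][k] -= lr * (up - down) / (2 * eps)
    return coords

edges = {(0, 1), (1, 2), (2, 3)}  # a 4-vertex path graph
layout = bcgl_layout(edges, 4)
print(loss(layout, edges, 4))  # lower than the random initial layout's loss
```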
Ryota SHIINA Toshihito FUJIWARA Tomohiro TANIGUCHI Shunsuke SARUWATARI Takashi WATANABE
In order to further reduce the transmission rate of multi-channel satellite broadcast signals, whose carrier-to-noise ratio (CNR) fluctuates due to rainfall attenuation, we propose a novel digitized radio-over-fiber (DRoF)-based optical re-transmission system built on adaptive combination compression for ultra-high definition (UHD) broadcasting satellite (BS)/communications satellite (CS) broadcast signals. The proposed system reduces the optical re-transmission rate of BS/CS signals as much as possible while handling input CNR fluctuations. Therefore, the transmission rate of communication signals in time-division multiplexing (TDM) transmission is ensured, and network sharing between communication signals and broadcast signals via a passive optical network (PON) is realized. Based on the ITU-R P.618-13 prediction model, an experimental evaluation is performed using estimates of the long-term statistics of rainfall attenuation. The attenuation is evaluated as the percentage of time during which long-term re-transmission service is available. It is shown that the proposed system is able to accommodate a wide range of rainfall attenuation and achieve a 99.988% time percentage for the duration of service provision. To show the rate reduction effect of the proposed system, the quantization bit reduction as a function of the input CNR, which depends on rainfall attenuation, is experimentally confirmed. Experiments show that a service operation time of 99.978% can be achieved with 3-bit transmission. This means a 62.5% reduction in transmission rate is realized compared with conventional fixed quantization. Furthermore, the average quantization bit number in our system over service operation times is 3.000, indicating that most service operation times are covered by just 3-bit transmission.
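The 62.5% figure is consistent with a fixed 8-bit quantization baseline; that baseline depth is an assumption inferred here, since the abstract does not state it. The arithmetic:

```python
FIXED_BITS = 8      # assumed conventional fixed-quantization depth (inferred)
ADAPTIVE_BITS = 3   # bits sufficient for 99.978% of service operation time

# Rate scales with bits per sample, so the relative reduction is 1 - 3/8.
reduction = 1 - ADAPTIVE_BITS / FIXED_BITS
print(f"{reduction:.1%}")  # 62.5%
```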
Tetsuya ARAKI Hiroyuki MIYATA Shin-ichi NAKANO
Given a set of n disjoint intervals on a line and an integer k, we want to find k points in the intervals so that the minimum pairwise distance of the k points is maximized. Intuitively, given a set of n disjoint time intervals on a timeline, each of which is a time span during which we are allowed to check something, and an integer k, the number of checks we will perform, we plan the k checking times so that they are spaced as evenly as possible; that is, we maximize the minimum time interval between the k checking times. We call this problem the k-dispersion problem on intervals. If we need to choose exactly one point in each interval, so that k=n, and the disjoint intervals are given in sorted order on the line, then two O(n)-time algorithms for the problem are known. In this paper we give the first O(n)-time algorithm that solves the problem for any constant k. Our algorithm works even if the disjoint intervals are given in an arbitrary (not sorted) order. If the disjoint intervals are given in sorted order on the line, then, by slightly modifying the algorithm, one can solve the problem in O(log n) time. This is the first sublinear-time algorithm for the problem. We also show some results on the k-dispersion problem on disks, including an FPTAS.
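A simple reference approach illustrates the problem: binary search on the answer with a greedy left-to-right feasibility test. This is not the paper's O(n) algorithm, and the interval data below are illustrative:

```python
def feasible(intervals, k, d):
    """Greedy test: can k points, pairwise at least d apart, fit in the intervals?

    intervals must be disjoint and sorted by left endpoint; d must be positive.
    """
    placed = 0
    nxt = float("-inf")  # earliest allowed coordinate for the next point
    for s, e in intervals:
        pos = max(s, nxt)
        while pos <= e:
            placed += 1
            if placed >= k:
                return True
            pos += d
        nxt = pos  # either last placed point + d, or the unchanged bound
    return False

def max_min_dispersion(intervals, k, iters=100):
    """Binary search the largest achievable minimum pairwise distance."""
    intervals = sorted(intervals)
    lo, hi = 0.0, intervals[-1][1] - intervals[0][0]
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(intervals, k, mid):
            lo = mid
        else:
            hi = mid
    return lo

print(max_min_dispersion([(0, 0), (5, 6), (10, 10)], 3))  # -> 5.0 (points 0, 5, 10)
```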
Hiroaki YAMANAKA Yuuichi TERANISHI Eiji KAWAI Hidehisa NAGANO Hiroaki HARAI
Running IoT applications on edge computing infrastructures has the benefits of low response times and efficient bandwidth usage. System verification on a testbed is required before deploying IoT applications to production environments. In a testbed, Docker containers are preferable for a smooth transition of tested application programs to production environments. In addition, the round-trip times (RTTs) between Docker containers and clients must be ensured according to the target application's response time requirements. However, existing testbed systems do not ensure the RTTs between Docker containers and clients. Thus, we must prepare a large amount of configuration data, including the RTTs between all pairs of wireless base station nodes and servers, to set up a testbed environment. In this paper, we present an edge computing testbed system with simple application programming interfaces (APIs) for testbed users that ensures RTTs between Docker containers and clients. The proposed system automatically determines which servers to place Docker containers on according to virtual regions and the RTTs specified by the testbed users through the APIs. The virtual regions provide reduced-size information about the RTTs in a network. In the proposed system, the configuration data size is reduced by a factor of the number of servers, and the command-argument length is reduced to approximately one-third or less, whereas the system running time increases by only 4.3 s.
Keke ZHAO Peng SONG Shaokai LI Wenjing ZHANG Wenming ZHENG
In this letter, we present an adaptive weighted transfer subspace learning (AWTSL) method for cross-database speech emotion recognition (SER), which can efficiently eliminate the discrepancy between source and target databases. Specifically, on the one hand, a subspace projection matrix is first learned to project the cross-database features into a common subspace; at the same time, each target sample can be represented by the source samples using a sparse reconstruction matrix. On the other hand, we design an adaptive weighted matrix learning strategy, which can improve the reconstruction contribution of important features and eliminate the negative influence of redundant features. Finally, we conduct extensive experiments on four benchmark databases, and the experimental results demonstrate the efficacy of the proposed method.
Rui YANG Raphael SHU Hideki NAKAYAMA
Generative Adversarial Networks (GANs) are one of the most successful learning principles for generative models and have been widely applied to many generation tasks. The gradient penalty (GP) was originally applied in Wasserstein GAN to enforce Lipschitz continuity on the discriminator. Although the vanilla gradient penalty was later modified for different purposes, seeking a better equilibrium and higher generation quality in adversarial learning remains challenging. Recently, DRAGAN was proposed to achieve local linearity in a surrounding data manifold by applying a noised gradient penalty to promote local convexity in model optimization. However, we show that this approach imposes a burden on satisfying Lipschitz continuity for the discriminator. This conflict between Lipschitz continuity and local linearity in DRAGAN results in a poor equilibrium, and thus the generation quality is far from ideal. To this end, we propose a novel approach that benefits both local linearity and Lipschitz continuity, reaching a better equilibrium without conflict. In detail, we apply our synchronized activation function in the discriminator to receive a particular form of noised gradient penalty that achieves local linearity without losing the Lipschitz continuity of the discriminator. Experimental results show that our method reaches superior image quality and outperforms WGAN-GP, DiracGAN, and DRAGAN in terms of Inception Score and Fréchet Inception Distance on real-world datasets.
An interpretation method for inversion phenomena is newly proposed for backward transient scattered field components for both E- and H-polarizations when an ultra-wideband (UWB) pulse wave radiated from a line source is incident on a two-dimensional metal cylinder covered with a lossless dielectric layer (coated metal cylinder). A time-domain (TD) asymptotic solution, referred to as a TD saddle point technique (TD-SPT), is derived by applying the SPT to the evaluation of a backward transient scattered field expressed in integral form. The TD-SPT is represented by a combination of a direct geometric optical ray (DGO) and a reflected GO (RGO) series, and is thereby able to extract and calculate any backward transient scattered field component from a response waveform. The TD-SPT is useful for understanding the response waveform of a backward transient scattered field from a coated metal cylinder because it gives the peak value and arrival time of any field component, namely the DGO and RGO components, and analytically interprets the inversion phenomenon of any field component. The accuracy, validity, and practicality of the TD-SPT are clarified by comparison with two kinds of reference solutions.
This study considers an extension of a sparse regularization method with scaling, especially in thresholding methods, which are simple and typical examples of sparse modeling. In the setting of a non-parametric orthogonal regression problem, we developed and analyzed a thresholding method in which soft thresholding estimators are independently expanded by empirical scaling values. The scaling values have a common hyper-parameter that is the order of expansion of an ideal scaling value that achieves hard thresholding. We simply refer to this estimator as a scaled soft thresholding estimator. The scaled soft thresholding method is a bridge between the soft and hard thresholding methods. This new estimator is indeed consistent with the adaptive LASSO estimator in the orthogonal case; it is thus another derivation of the adaptive LASSO estimator. It is a general method that includes soft thresholding and the non-negative garrote as special cases. We subsequently derived the degrees of freedom of the scaled soft thresholding by calculating Stein's unbiased risk estimate. We found that it decomposes into the degrees of freedom of soft thresholding and a remainder term connected to hard thresholding. As the degrees of freedom reflect the degree of over-fitting, this implies that scaled soft thresholding has another source of over-fitting in addition to the number of un-removed components. The theoretical result was verified by a simple numerical example. In this process, we also focused on the non-monotonicity of the above remainder term of the degrees of freedom and found that, in a sparse and large-sample setting, it is mainly caused by useless components that are not related to the target function.
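The abstract does not spell out the scaling formula, but the estimators it names as special cases are standard. A hedged sketch of soft thresholding, hard thresholding, and the non-negative garrote applied to a single observed coefficient y with threshold lam:

```python
def soft(y, lam):
    """Soft thresholding: shrink toward zero by lam, then keep-or-kill."""
    if abs(y) <= lam:
        return 0.0
    return y - lam if y > 0 else y + lam

def hard(y, lam):
    """Hard thresholding: keep y unchanged, or kill it."""
    return y if abs(y) > lam else 0.0

def garrote(y, lam):
    """Non-negative garrote: data-dependent shrinkage between soft and hard."""
    if y == 0.0:
        return 0.0
    return y * max(1.0 - (lam / y) ** 2, 0.0)

y, lam = 3.0, 1.0
print(soft(y, lam), hard(y, lam), garrote(y, lam))
# soft shrinks the most, hard not at all, the garrote sits in between
```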
Yoshiaki TAKATA Ryoma SENDA Hiroyuki SEKI
A register pushdown system (RPDS) is an extension of a pushdown system (PDS) that has registers for dealing with data values. An LTL model checking method for RPDS with regular valuations was proposed in previous work; however, that method requires the register automata (RA) used for defining a regular valuation to be backward-deterministic. This paper proposes another approach to the same problem, in which the model checking problem for RPDS is reduced to that for PDS by constructing a PDS bisimulation-equivalent to a given RPDS. This construction is simpler than the previous model checking method and does not require the RAs to be deterministic or backward-deterministic, and the bisimulation equivalence clearly guarantees the correctness of the reduction. On the other hand, the proposed method requires every RPDS (and RA) to have the freshness property: whenever the RPDS updates a register with a data value not stored in any register or at the stack top, the value must be fresh. This paper also shows that the model checking problem with regular valuations defined by general RAs is undecidable, and thus the freshness constraint is essential in the proposed method.