Homomorphic encryption (HE) is a promising approach for privacy-preserving applications, enabling a third party to evaluate functions on encrypted data. However, two problems hinder the implementation of privacy-preserving applications with HE: 1) long function evaluation latency and 2) limited HE primitives, which allow only additions and multiplications. Homomorphic lookup-table (LUT) methods have emerged to solve these problems and improve function evaluation efficiency by replacing intricate operations with table lookups. Previously proposed LUTs use bit-wise HE, such as TFHE, to evaluate single-input functions; however, their latency increases with the bit length of the function's input(s) and output, and an efficient implementation of multi-input functions remains an open question. This paper proposes a novel LUT-based privacy-preserving function evaluation method that handles multi-input functions while reducing latency by adopting word-wise HE. Our optimization strategy adjusts table sizes to minimize latency while preserving function output accuracy, especially for common machine-learning functions. In an experimental evaluation using the BFV scheme of the Microsoft SEAL library, we measured the runtime of arbitrary functions whose LUTs consist of all input-output combinations representable with the given input bits: 1) single-input 12-bit functions in 0.14 s, 2) single-input 18-bit functions in 2.53 s, 3) two-input 6-bit functions in 0.17 s, and 4) three-input 4-bit functions in 0.20 s, using four threads. Moreover, we confirmed that our table size optimization strategy works well, achieving a 1.2-fold speedup with the same absolute error on the order of $10^{-4}$ (i.e., $a \times 10^{-4}$, where $1/\sqrt{10} \le a < \sqrt{10}$) for Swish, and a 1.9-fold speedup for ReLU while reducing the absolute error from the order of $10^{-2}$ to $10^{-4}$, compared with the baseline, i.e., polynomial approximation.
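The core idea can be sketched in plain Python: encode the input as a one-hot vector (each entry a ciphertext under a word-wise scheme such as BFV), so that the lookup reduces to multiplications and additions, the only primitives HE provides. This is a minimal conceptual sketch under that assumption; the names and table layout are illustrative and do not reproduce the paper's SEAL implementation.

    # Conceptual sketch of LUT evaluation under HE constraints (additions and
    # multiplications only). Plaintext arithmetic stands in for HE operations.

    def lut_eval(one_hot_ct, table):
        """Select table[x] given a (conceptually encrypted) one-hot encoding of x.

        one_hot_ct[i] is 1 if x == i else 0; under HE each entry would be a
        ciphertext, and the loop below would use ciphertext-plaintext
        multiplication and ciphertext addition, both available in BFV.
        """
        acc = 0
        for bit, entry in zip(one_hot_ct, table):
            acc += bit * entry   # stand-ins for HE multiply and add
        return acc

    # Example: a 4-bit ReLU-like table over signed inputs -8..7.
    table = [max(v, 0) for v in range(-8, 8)]
    x = 5                                            # index 13 in the table
    one_hot = [1 if i == x + 8 else 0 for i in range(16)]
    assert lut_eval(one_hot, table) == 5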
The spread of fake news is a growing problem that will only worsen in our interconnected world. Machine learning, particularly deep learning, is being used to detect misinformation; however, the models employed are essentially black boxes and are thus uninterpretable. This paper presents an overview of explainable fake news detection models. Specifically, we first review the existing models, datasets, evaluation techniques, and visualization processes. We then identify and discuss possible improvements in this field.
Yuetsu KODAMA Hirohumi SAKANE Mitsuhisa SATO Hayato YAMANA Shuichi SAKAI Yoshinori YAMAGUCHI
Communication latency is central to multiprocessor design. This study presents the design principles of the EM-X distributed-memory multiprocessor for tolerating communication latency. EM-X tolerates latency by overlapping computation with communication through multithreading. In particular, we present two types of hardware support for remote memory access: (1) priority-based packet scheduling for thread invocation, and (2) direct remote memory access. The priority-based scheduling policy extends a FIFO-ordered thread invocation policy to adapt to different computational needs. Direct remote memory access is designed to overlap remote memory operations with thread execution. An 80-processor prototype of EM-X has been developed and has been operational since December 1995. We executed several programs on the machine and evaluated how effectively EM-X overlaps computation with communication to tolerate communication latency for high-performance parallel computing.
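As a rough software analogy for the first mechanism, a priority-based invocation queue can be viewed as a FIFO extended with packet priorities, so urgent packets (e.g., remote-memory replies) bypass ordinary ones. The sketch below is an assumption made for illustration, not the actual EM-X hardware policy.

    # Illustrative sketch (not EM-X hardware behavior): FIFO thread invocation
    # extended with packet priorities; equal priorities keep arrival order.
    import heapq
    from itertools import count

    class PriorityInvocationQueue:
        def __init__(self):
            self._heap = []
            self._seq = count()     # tie-breaker preserving FIFO order

        def enqueue(self, priority, packet):
            # Lower number = higher priority.
            heapq.heappush(self._heap, (priority, next(self._seq), packet))

        def dequeue(self):
            return heapq.heappop(self._heap)[2]

    q = PriorityInvocationQueue()
    q.enqueue(1, "ordinary thread-invocation packet")
    q.enqueue(0, "remote-memory reply packet")
    print(q.dequeue())   # the reply bypasses the earlier ordinary packet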
When recommending an item one loves to another person, accuracy is important; in most cases, however, focusing only on accuracy yields less satisfactory recommendations. Studies have repeatedly pointed out that aspects going beyond accuracy, such as the diversity and novelty of the recommended items, are as important as accuracy in making a satisfactory recommendation. Despite their importance, there is no global consensus on how to define and evaluate beyond-accuracy aspects, because they are closely tied to the subjective nature of user satisfaction. In addition, devising algorithms for this purpose is difficult, because such algorithms must concurrently pursue aspects that are in a trade-off relationship (e.g., accuracy vs. novelty). In this situation, researchers initiating a study in this domain need a systematically integrated view of it. This paper reports the results of a survey of about 70 studies published over the last 15 years, each of which addresses recommendations that consider beyond-accuracy aspects. From this survey, we identify diversity, novelty, and coverage as important aspects in achieving serendipity and popularity unbiasedness, factors that matter to user satisfaction and business profits, respectively. The five major groups of algorithms that tackle beyond-accuracy aspects are multi-objective, modified collaborative filtering (CF), clustering, graph, and hybrid approaches; we classify and describe the surveyed algorithms according to this typology. The off-line evaluation metrics and user studies carried out in the surveyed work are also described. Based on the survey results, we assert that considerable room for research remains in this domain. In particular, personalization and generalization are important issues for future research (e.g., automatically balancing the trade-off among aspects per user, and properly defining beyond-accuracy aspects for various types of applications and algorithms).
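To make the off-line evaluation concrete, the sketch below computes two commonly used beyond-accuracy metrics: catalog coverage and popularity-based novelty (mean self-information). These are textbook formulations chosen for illustration; the surveyed papers differ in their exact definitions.

    # Minimal sketch of two common off-line beyond-accuracy metrics.
    import math

    def catalog_coverage(rec_lists, catalog_size):
        """Fraction of the catalog appearing in at least one user's list."""
        recommended = {item for recs in rec_lists for item in recs}
        return len(recommended) / catalog_size

    def novelty(rec_list, popularity, n_users):
        """Mean self-information -log2 p(item); rarer items score higher."""
        return sum(-math.log2(popularity[i] / n_users)
                   for i in rec_list) / len(rec_list)

    # Toy example: two users, a five-item catalog, item interaction counts.
    recs = [["a", "b"], ["a", "c"]]
    pop = {"a": 90, "b": 10, "c": 5}
    print(catalog_coverage(recs, 5))       # 0.6
    print(novelty(["a", "b"], pop, 100))   # "b" contributes more novelty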