Maaki SAKAI Kanon HOKAZONO Yoshiko HANADA
In this letter, we propose a method to introduce tabu search into Edge Assembly Crossover (EAX), which is an effective crossover method in solving the traveling salesman problem (TSP) using genetic algorithms. The proposed method, called EAX-tabu, archives the edges that have been exchanged over the past few generations into the tabu list for each individual and excludes them from the candidate edges to be exchanged when generating offspring by the crossover, thereby increasing the diversity of edges in the offspring. The effectiveness of the proposed method is demonstrated through numerical experiments on medium-sized instances of TSPLIB and VLSI TSP.
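The per-individual tabu bookkeeping described above can be sketched as follows; the tenure length, class names, and the candidate-filtering step are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

TABU_TENURE = 3  # assumed number of past generations an exchanged edge stays tabu


class Individual:
    def __init__(self, tour):
        self.tour = tour                       # permutation of city indices
        self.tabu = deque(maxlen=TABU_TENURE)  # one set of exchanged edges per generation

    def tabu_edges(self):
        return set().union(*self.tabu) if self.tabu else set()

    def record_exchange(self, exchanged_edges):
        # archive the edges exchanged in this generation (undirected, hence frozenset)
        self.tabu.append({frozenset(e) for e in exchanged_edges})


def filter_candidates(candidate_edges, parent):
    # exclude recently exchanged edges from the edges EAX may exchange for this parent
    banned = parent.tabu_edges()
    return [e for e in candidate_edges if frozenset(e) not in banned]
```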
Keita EMURA Kaisei KAJITA Go OHTAKE
As a multi-receiver variant of public key encryption with keyword search (PEKS), broadcast encryption with keyword search (BEKS) has been proposed (Attrapadung et al. at ASIACRYPT 2006/Chatterjee-Mukherjee at INDOCRYPT 2018). Unlike broadcast encryption, no receiver anonymity is considered because the test algorithm takes a set of receivers as input and thus the set of receivers needs to be contained in a ciphertext. In this paper, we propose a generic construction of BEKS from anonymous and weakly robust 3-level hierarchical identity-based encryption (HIBE). The proposed generic construction provides outsider anonymity, where an adversary is allowed to obtain secret keys of outsiders who do not belong to the challenge sets, and provides sublinear-size ciphertexts in terms of the number of receivers. Moreover, the proposed construction considers security against chosen-ciphertext attack (CCA), where an adversary is allowed to access a test oracle in the searchable encryption context. The proposed generic construction can be seen as an extension of the Fazio-Perera generic construction of anonymous broadcast encryption (PKC 2012) from anonymous and weakly robust identity-based encryption (IBE) and the Boneh et al. generic construction of PEKS (EUROCRYPT 2004) from anonymous IBE. We run the Fazio-Perera construction on the first-level identity and the Boneh et al. generic construction on the second-level identity, i.e., a keyword is regarded as a second-level identity. The third-level identity is used to provide CCA security by employing one-time signatures. We also introduce weak robustness in the HIBE setting and demonstrate that the Abdalla et al. generic transformation (TCC 2010/JoC 2018) for providing weak robustness to IBE works for HIBE with an appropriate parameter setting. We also explicitly present attractive concrete instantiations of the proposed generic construction from pairings and lattices, respectively.
Shoichi HIROSE Hidenori KUWAKADO
In 2005, Nandi introduced a class of double-block-length compression functions hπ(x) := (h(x), h(π(x))), where h is a random oracle with an n-bit output and π is a non-cryptographic public permutation. Nandi demonstrated that the collision resistance of hπ is optimal if π has no fixed point in the classical setting. Our study explores the collision resistance of hπ and the Merkle-Damgård hash function using hπ in the quantum random oracle model. Firstly, we reveal that the quantum collision resistance of hπ may not be optimal even if π has no fixed point. If π is an involution, then a colliding pair of inputs can be found for hπ with only O(2^{n/2}) queries by the Grover search. Secondly, we present a sufficient condition on π for the optimal quantum collision resistance of hπ. This condition states that any collision attack needs Ω(2^{2n/3}) queries to find a colliding pair of inputs. The proof uses the recent technique of Zhandry’s compressed oracle. Thirdly, we show that the quantum collision resistance of the Merkle-Damgård hash function using hπ can be optimal even if π is an involution. Finally, we discuss the quantum collision resistance of double-block-length compression functions using a block cipher.
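The following toy script illustrates the construction hπ(x) := (h(x), h(π(x))) and the predicate behind the involution attack mentioned above: any x with h(x) = h(π(x)) and x ≠ π(x) yields a colliding pair, and Grover search over this predicate is what gives the O(2^{n/2}) query bound. The truncated hash, the XOR involution, and the small parameters are assumptions chosen so that the classical stand-in search terminates quickly.

```python
import hashlib

N_BITS = 16  # toy output length (a real attack targets an n-bit random oracle)

def h(x: int) -> int:
    # stand-in for the random oracle, truncated to N_BITS
    d = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(d, "big") % (1 << N_BITS)

def pi(x: int) -> int:
    # XOR with a nonzero constant is a fixed-point-free involution
    return x ^ 0xA5A5

def h_pi(x: int):
    return (h(x), h(pi(x)))

# classical stand-in for the Grover predicate search
for x in range(1 << 20):
    if h(x) == h(pi(x)) and x != pi(x):
        # then h_pi(x) == h_pi(pi(x)) == (h(x), h(x))
        assert h_pi(x) == h_pi(pi(x))
        print(f"colliding pair: {x} and {pi(x)}")
        break
```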
This article reviews the author's group's research achievements in the analog/mixed-signal circuit and system area, with an introduction of how they came up with the ideas. Analog/mixed-signal circuits and systems have to be well balanced in many aspects, and coming up with ideas requires experience and discussions with other researchers; it also depends heavily on the individual researcher. Here, the author's group's own experiences are presented along with their research motivations.
Sofia SAHAB Jawad HAQBEEN Takayuki ITO
Despite the increasing use of conversational artificial intelligence (AI) in online discussion environments, few studies explore the application of AI as a facilitator in forming problem-solving debates and influencing opinions in cross-venue scenarios, particularly in diverse and war-ravaged countries. This study aims to investigate the impact of AI on enhancing participant engagement and collaborative problem-solving in online-mediated discussion environments, especially in diverse and heterogeneous discussion settings, such as the five cities in Afghanistan. We seek to assess the extent to which AI participation in online conversations succeeds by examining the depth of discussions and participants' contributions, comparing discussions facilitated by AI with those not facilitated by AI across different venues. The results are discussed with respect to forming and changing opinions with and without AI-mediated communication. The findings indicate that the number of opinions generated in AI-facilitated discussions significantly differs from discussions without AI support. Additionally, statistical analyses reveal quantitative disparities in online discourse sentiments when conversational AI is present compared to when it is absent. These findings contribute to a better understanding of the role of AI-mediated discussions and offer several practical and social implications, paving the way for future developments and improvements.
Juntong HONG Eunjong CHOI Osamu MIZUNO
Code search is a task to retrieve the most relevant code given a natural language query. Several recent studies have proposed deep-learning-based methods that use a multi-encoder model to parse code into multiple fields for code representation. These methods enhance model performance by distinguishing between similar code snippets and utilizing a relation matrix to bridge the code and the query. However, these models require more computational resources and parameters than single-encoder models. Furthermore, a relation matrix that relies solely on max-pooling disregards word alignment information. To alleviate these problems, we propose a combined alignment model for code search. We concatenate the multiple code fields into one sequence to represent the code and use a single encoder to encode the code features. Moreover, we transform the relation matrix using trainable vectors to avoid information loss. Then, we combine intra-modal and cross-modal attention to weight the salient words while matching the corresponding code and query. Finally, we apply the attention weights to the code/query embeddings and compute the cosine similarity. To evaluate the performance of our model, we compare it with six previous models on two popular datasets. The results show that our model achieves Top@1 scores of 0.614 and 0.687, outperforming the best comparison models by 12.2% and 9.3%, respectively.
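A minimal numpy sketch of the matching step described above is given below: attention weights derived from the code-query relation matrix and from intra-modal self-similarity are combined and applied to the token embeddings before the cosine-similarity score. The shapes, the softmax weighting, and the equal mixing coefficient are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def match_score(code_emb, query_emb):
    # code_emb: (Lc, d) concatenation of the multi-field code tokens
    # query_emb: (Lq, d) query tokens
    relation = code_emb @ query_emb.T                  # (Lc, Lq) relation matrix
    cross_c = softmax(relation.max(axis=1))            # salience of code tokens w.r.t. the query
    cross_q = softmax(relation.max(axis=0))            # salience of query tokens w.r.t. the code
    intra_c = softmax((code_emb @ code_emb.T).mean(axis=1))
    intra_q = softmax((query_emb @ query_emb.T).mean(axis=1))
    w_c = 0.5 * (cross_c + intra_c)                    # combined intra-/cross-modal attention
    w_q = 0.5 * (cross_q + intra_q)
    c_vec = w_c @ code_emb                             # weighted code embedding, shape (d,)
    q_vec = w_q @ query_emb
    return float(c_vec @ q_vec / (np.linalg.norm(c_vec) * np.linalg.norm(q_vec)))
```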
Public key authenticated encryption with keyword search (PAEKS) has been proposed, where a sender's secret key is required for encryption and a trapdoor is associated with not only a keyword but also the sender. This setting allows us to prevent information leakage of keywords from trapdoors. Liu et al. (ASIACCS 2022) proposed a generic construction of PAEKS based on word-independent smooth projective hash functions (SPHFs) and PEKS. In this paper, we propose a new generic construction of PAEKS, which is more efficient than Liu et al.'s in the sense that we use only one SPHF, whereas Liu et al. used two SPHFs. In addition, for consistency, we consider a security model that is stronger than Liu et al.'s. Briefly, Liu et al. considered only keywords, even though a trapdoor is associated with not only a keyword but also a sender. Thus, a trapdoor associated with a sender should not work against ciphertexts generated by the secret key of another sender, even if the same keyword is associated. That is, under the previous definitions, there is room for a ciphertext to be searchable even though the sender was not specified when the trapdoor was generated, which violates the authenticity of PAEKS. Our consistency definition considers a multi-sender setting and captures this case. In addition, for indistinguishability against chosen keyword attack (IND-CKA) and indistinguishability against inside keyword guessing attack (IND-IKGA), we use a stronger security model defined by Qin et al. (ProvSec 2021), where an adversary is allowed to query challenge keywords to the encryption and trapdoor oracles. We also highlight several issues associated with the Liu et al. construction in terms of hash functions, e.g., their construction does not satisfy the consistency that they claimed to hold.
Shota AKIYOSHI Yuzo TAENAKA Kazuya TSUKAMOTO Myung LEE
Cross-domain data fusion is becoming a key driver in the growth of numerous and diverse applications in the Internet of Things (IoT) era. We have proposed the concept of a new information platform, Geo-Centric Information Platform (GCIP), that enables IoT data fusion based on geolocation, i.e., produces spatio-temporal content (STC), and then provides the STC to users. In this environment, users cannot know in advance “when,” “where,” or “what type” of STC is being generated because the type and timing of STC generation vary dynamically with the diversity of IoT data generated in each geographical area. This makes it difficult to directly search for a specific STC requested by the user using a content identifier (the domain name of a URI or a content name). To solve this problem, a new content discovery method that does not directly specify content identifiers is needed, taking into account (1) spatial and (2) temporal constraints. In our previous study, we proposed a content discovery method that considers only spatial constraints, leaving temporal constraints unaddressed. This paper proposes a new content discovery method that matches user requests with content metadata (topic) characteristics while taking into account both spatial and temporal constraints. Simulation results show that the proposed method successfully discovers appropriate STC in response to a user request.
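The spatial and temporal matching described above can be sketched as follows; the metadata fields (topic, area_id, created_at, lifetime) and the exact-match rules are assumptions for illustration, not the GCIP message format.

```python
from dataclasses import dataclass

@dataclass
class STCMeta:
    topic: str          # content metadata (topic) characteristics
    area_id: str        # geographical area where the STC was produced
    created_at: float   # generation time (epoch seconds)
    lifetime: float     # seconds during which the STC remains valid

@dataclass
class Request:
    topic: str
    area_id: str
    issued_at: float

def matches(meta: STCMeta, req: Request) -> bool:
    spatial_ok = meta.area_id == req.area_id                                           # (1) spatial constraint
    temporal_ok = meta.created_at <= req.issued_at <= meta.created_at + meta.lifetime  # (2) temporal constraint
    return spatial_ok and temporal_ok and meta.topic == req.topic

def discover(catalog, req):
    # return all STC whose metadata matches the user request
    return [m for m in catalog if matches(m, req)]
```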
Yuma TSUCHIDA Kohei KUBO Hisashi KOGA
Similarity search for data streams has attracted much attention for information recommendation. In this context, recent leading works regard the latest W items in a data stream as an evolving set and reduce similarity search for data streams to set similarity search. Whereas they consider standard sets composed of items, this paper uniquely studies similarity search for text streams and treats evolving sets whose elements are texts. Specifically, we formulate a new continuous range search problem named the CTS problem (Continuous similarity search for Text Sets). The task of the CTS problem is to find all the text streams in the database whose similarity to the query becomes larger than a threshold ε. It abstracts a scenario in which a user-based recommendation system searches for similar users on social networking services. The CTS problem is important because it allows both the query and the database to change dynamically. We develop a fast pruning-based algorithm for the CTS problem. Moreover, we discuss how to speed it up with an inverted index.
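A brute-force sketch of the CTS setting is given below: each stream keeps its latest W texts as an evolving set and the query stream is compared against every database stream, with Jaccard similarity over word sets standing in for the similarity measure. The pruning-based algorithm and the inverted index of the paper are not reproduced here.

```python
from collections import deque

W = 5          # window size (the latest W texts form the evolving set)
EPS = 0.3      # similarity threshold

class TextStream:
    def __init__(self):
        self.window = deque(maxlen=W)

    def push(self, text: str):
        self.window.append(text)

    def word_set(self):
        return {w for t in self.window for w in t.lower().split()}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cts_query(query_stream, database):
    # database: dict mapping stream id -> TextStream
    q = query_stream.word_set()
    return [sid for sid, s in database.items() if jaccard(q, s.word_set()) > EPS]
```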
Takashi KURIMOTO Koji SASAYAMA Osamu AKASHI Kenjiro YAMANAKA Naoya KITAGAWA Shigeo URUSHIDANI
This paper describes the architectural design, services, and operation and monitoring functions of Science Information NETwork 6 (SINET6), a 400-Gigabit Ethernet-based academic backbone network launched on a nationwide scale in April 2022. In response to the requirements from universities and research institutions, SINET upgraded its world-class network speed, improved its accessibility, enhanced services and security, incorporated 5G mobile functions, and strengthened international connectivity. With fully-meshed connectivity and fast rerouting, it attains nationwide high performance and high reliability. The evaluation results of network performance are also reported.
In this paper, multi-input multi-output (MIMO) signal detection with a random walk along a gradient descent direction using an intermediate search point is presented. As a low-complexity MIMO signal detection scheme, a gradient descent algorithm with Metropolis-Hastings (MH) methods has been proposed. A random walk along a gradient descent direction speeds up the MH-based search using the gradient of a least-squares cost function. However, the gradient vector may be discarded through QAM constellation quantization in some cases. For further performance improvement, this paper proposes an improved search scheme in which the gradient vector is stored for the next search iteration to generate an intermediate search point. The performance of the proposed scheme improves with higher-order modulation symbols as compared with that of a conventional scheme. Numerical results obtained through computer simulation show that the bit error rate (BER) performance improves by 5 dB at a BER of 10^-3 for 64QAM symbols in a 16×16 MIMO system.
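One MH iteration of the idea described above can be sketched as follows: the walk moves along the gradient of the least-squares cost ||y - Hs||^2, the candidate is quantized to the QAM constellation, and the unquantized point is returned as the intermediate search point for the next iteration. The step size, random-walk scale, temperature, and 16-QAM quantizer are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

QAM_LEVELS = np.array([-3, -1, 1, 3])  # per-dimension 16-QAM amplitudes

def quantize(s):
    # nearest QAM symbol, real and imaginary parts independently
    q = lambda v: QAM_LEVELS[np.argmin(np.abs(QAM_LEVELS[:, None] - v), axis=0)]
    return q(s.real) + 1j * q(s.imag)

def cost(y, H, s):
    r = y - H @ s
    return float(np.real(r.conj() @ r))

def mh_step(y, H, s_quant, s_interm, step=0.05, temp=1.0, rng=np.random.default_rng()):
    grad = -2 * H.conj().T @ (y - H @ s_interm)          # gradient at the intermediate point
    walk = rng.normal(scale=0.1, size=s_interm.shape) + 1j * rng.normal(scale=0.1, size=s_interm.shape)
    s_new_interm = s_interm - step * grad + walk         # random walk along the descent direction
    s_candidate = quantize(s_new_interm)                 # quantization may discard the gradient step...
    accept = np.exp((cost(y, H, s_quant) - cost(y, H, s_candidate)) / temp) > rng.random()
    if accept:
        s_quant = s_candidate
    return s_quant, s_new_interm                         # ...so the unquantized point is also kept
```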
Shunta TERUI Katsuhisa YAMANAKA Takashi HIRAYAMA Takashi HORIYAMA Kazuhiro KURITA Takeaki UNO
We are given a set S of n points in the Euclidean plane. We assume that S is in general position. A simple polygon P is an empty polygon of S if each vertex of P is a point in S and every point in S is either outside P or a vertex of P. In this paper, we consider the problem of enumerating all the empty polygons of a given point set. To design an efficient enumeration algorithm, we use a reverse search by Avis and Fukuda with child lists. We propose an algorithm that enumerates all the empty polygons of S in O(n^2 |ε(S)|) time, where ε(S) is the set of empty polygons of S. Moreover, by applying the same idea to the problem of enumerating surrounding polygons of a given point set S, we propose an enumeration algorithm that enumerates them in O(n^2) delay, while the known algorithm enumerates in O(n^2 log n) delay, where a surrounding polygon of S is a polygon such that each vertex of the polygon is a point in S and every point in S is either inside the polygon or a vertex of the polygon.
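The emptiness condition in the definition above can be checked as sketched below: a candidate polygon, given as an ordered list of points from S, is empty if every remaining point of S lies outside it. The ray-casting test assumes general position; the reverse-search enumeration itself is not reproduced here.

```python
def point_in_polygon(pt, poly):
    # standard ray casting: count crossings of a rightward ray from pt
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def is_empty_polygon(poly, S):
    # poly: ordered list of vertices (tuples) taken from S; S: list of point tuples
    vertices = set(poly)
    return all(p in vertices or not point_in_polygon(p, poly) for p in S)
```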
In industry, automatic speech recognition has come to be a competitive feature for embedded products with poor hardware resources. In this work, we propose a tiny end-to-end speech recognition model that is lightweight and easily deployable on edge platforms. First, instead of sophisticated network structures such as recurrent neural networks, transformers, etc., the proposed model mainly uses convolutional neural networks as its backbone. This ensures that our model is supported by most software development kits for embedded devices. Second, we adopt the basic unit of MobileNet-v3, which performs well in computer vision tasks, and integrate the features of the hidden layer at different scales, thus compressing the number of parameters of the model to less than 1 M and achieving an accuracy greater than that of some traditional models. Third, in order to further reduce the CPU computation, we directly extract acoustic representations from 1-dimensional speech waveforms and use a self-supervised learning approach to encourage the convergence of the model. Finally, to cope with relatively weak hardware resources, we use a prefix beam search decoder that dynamically extends the search path with an optimized pruning strategy and an additional initialism language model that captures between-word probabilities in advance, thus avoiding premature pruning of correct words. In our experiments, according to a number of evaluation categories, our end-to-end model outperformed several tiny speech recognition models used for embedded devices in related work.
Many countries are facing the aging problem caused by the growth of the elderly population. The nursing home (NH) is a common solution for long-term care of the elderly. This paper develops a simulator to model elder behavior in an NH, which considers public areas where elders interact and imitates their general, group, and special activities. Elders have preferences that determine the activities they take. The simulator also takes account of the movement of elders and abnormal events. Based on the simulator, two seeking methods are proposed for caregivers to search for lost elders efficiently, which helps them quickly find elders who may have accidents.
Takuto ARAI Daisei UCHIDA Tatsuhiko IWAKUNI Shuki WAI Naoki KITA
High-gain antennas with narrow beamforming are required to compensate for the high propagation loss expected in high frequency bands such as the millimeter wave and sub-terahertz wave bands, which are promising for achieving extremely high speeds and capacity. However, using narrow beamforming for initial access (IA) beam search in all directions incurs an excessive overhead. Using wide beamforming can reduce the overhead for IA, but it also shrinks the coverage area due to the lower beamforming gain. Here, it is assumed that there are situations in which the required coverage distance differs depending on the direction from the antenna. For example, the distance to the floor for a ceiling-mounted antenna varies depending on the direction, and the distance to an obstruction becomes the required coverage distance for an antenna installation design that assumes line-of-sight. In this paper, we propose a novel IA beam search scheme with adaptive beam width control based on the distance to shielding obstacles in each direction. Simulations and experiments show that the proposed method reduces the overhead by 20%-50% without shrinking the coverage area in shielded environments compared to exhaustive beam search with narrow beamforming.
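The per-direction beam-width selection described above can be sketched as follows: the distance to the nearest shielding obstacle in a direction fixes the required beamforming gain through a free-space link budget, and the widest beam that still meets that gain is used for IA in that direction. The carrier frequency, tolerable path loss, available beam widths, and the gain model are assumptions for illustration.

```python
import math

FREQ_HZ = 28e9
C = 3e8
MAX_LOSS_DB = 110.0                      # assumed tolerable path loss at the coverage edge
BEAM_WIDTHS_DEG = [60, 30, 15, 7.5]      # available beam widths, from widest to narrowest

def fspl_db(distance_m):
    # free-space path loss
    return 20 * math.log10(4 * math.pi * distance_m * FREQ_HZ / C)

def beam_gain_db(width_deg):
    # rough model: gain grows as the beam narrows
    return 10 * math.log10(360.0 / width_deg)

def select_beam_width(required_distance_m):
    required_gain = fspl_db(required_distance_m) - MAX_LOSS_DB
    for width in BEAM_WIDTHS_DEG:        # prefer the widest feasible beam (less IA overhead)
        if beam_gain_db(width) >= required_gain:
            return width
    return BEAM_WIDTHS_DEG[-1]           # fall back to the narrowest beam
```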
Qi TENG Guowei TENG Xiang LI Ran MA Ping AN Zhenglong YANG
The latest versatile video coding (VVC) introduces some novel techniques such as quadtree with nested multi-type tree (QTMT), multiple transform selection (MTS) and multiple reference line (MRL). These tools improve compression efficiency compared with the previous standard H.265/HEVC, but they suffer from very high computational complexity. One of the most time-consuming parts of VVC intra coding is the coding tree unit (CTU) structure decision. In this paper, we propose a low-complexity multi-type tree (MT) pruning method for VVC intra coding. This method consists of lookahead search and MT pruning. The lookahead search process is performed to derive the approximate rate-distortion (RD) cost of each MT node at depth 2 or 3. Subsequently, the improbable MT nodes are pruned by different strategies under different cost errors. These strategies are designed according to the priority of the node. Experimental results show that the overall proposed algorithm can achieve 47.15% time saving with only 0.93% Bjøntegaard delta bit rate (BDBR) increase over natural scene sequences, and 45.39% time saving with 1.55% BDBR increase over screen content sequences, compared with the VVC reference software VTM 10.0. Such results demonstrate that our method achieves a good trade-off between computational complexity and compression quality compared to recent methods.
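The pruning decision described above can be sketched as follows: each MT split mode receives an approximate RD cost from the lookahead pass, and modes whose cost exceeds the current best by a priority-dependent margin are skipped in the full search. The mode names and margins are assumptions for illustration, not the thresholds used in the paper.

```python
# larger margin = higher priority (the mode is pruned less aggressively)
PRUNE_MARGIN = {"BT_H": 1.02, "BT_V": 1.02, "TT_H": 1.05, "TT_V": 1.05}

def select_mt_candidates(approx_rd_cost):
    # approx_rd_cost: dict mapping an MT split mode to its lookahead RD cost estimate
    best = min(approx_rd_cost.values())
    return [m for m, c in approx_rd_cost.items() if c <= best * PRUNE_MARGIN[m]]
```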
Baohang ZHANG Haichuan YANG Tao ZHENG Rong-Long WANG Shangce GAO
The equilibrium optimizer (EO) is a novel physics-based meta-heuristic optimization algorithm that is inspired by estimating dynamics and equilibrium states in controlled-volume mass balance models. As a stochastic optimization algorithm, EO inevitably produces duplicated solutions, which wastes valuable evaluation opportunities. In addition, an excessive number of duplicated solutions can increase the risk of the algorithm getting trapped in local optima. In this paper, an improved EO algorithm with a bi-population-based non-revisiting (BNR) mechanism, namely BEO, is proposed. It aims to eliminate duplicate solutions generated by the population during iterations, thus avoiding wasted evaluation opportunities. Furthermore, when a revisited solution is detected, the BNR mechanism activates its unique archive population learning mechanism to assist the algorithm in generating a high-quality solution using the excellent genes in the historical information, which not only improves the algorithm's population diversity but also helps the algorithm escape the local optimum dilemma. Experimental findings on the IEEE CEC2017 benchmark demonstrate that the proposed BEO algorithm outperforms seven other representative meta-heuristic optimization techniques, including the original EO algorithm.
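A minimal sketch of a non-revisiting mechanism in the spirit described above is given below: previously generated solutions are detected via a hash of their rounded coordinates, and a revisited candidate is replaced by recombining genes from an archive of good historical solutions. The granularity, archive size, and recombination rule are illustrative assumptions, not the exact BNR mechanism.

```python
import numpy as np

GRANULARITY = 1e-6   # grid size used to decide whether two solutions are duplicates
ARCHIVE_SIZE = 50

visited = set()
archive = []         # list of (fitness, solution) pairs, kept sorted by fitness

def seen_before(x):
    key = tuple(np.round(x / GRANULARITY).astype(np.int64))
    if key in visited:
        return True
    visited.add(key)
    return False

def update_archive(x, fx):
    archive.append((fx, x.copy()))
    archive.sort(key=lambda t: t[0])
    del archive[ARCHIVE_SIZE:]

def replace_revisited(x, rng=np.random.default_rng()):
    # build a fresh candidate from the genes of two archived (historically good) solutions
    if len(archive) < 2:
        return x + rng.normal(scale=1e-3, size=x.shape)
    a, b = (archive[i][1] for i in rng.choice(len(archive), size=2, replace=False))
    mask = rng.random(x.shape) < 0.5
    return np.where(mask, a, b) + rng.normal(scale=1e-3, size=x.shape)
```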
Hiroaki YAMAMOTO Ryosuke ODA Yoshihiro WACHI Hiroshi FUJIWARA
A searchable symmetric encryption (SSE) scheme is a method that searches encrypted data without decrypting it. In this paper, we address the substring search problem: for a set D of documents and a pattern p, find all occurrences of p in D. Here, a document and a pattern are defined as strings. A directed acyclic word graph (DAWG), which is a deterministic finite automaton, is known to solve the substring search problem on plaintext. We improve the DAWG so that all of its transitions have distinct symbols. In addition, we present a space-efficient and secure substring SSE scheme using the improved DAWG. The proposed substring SSE scheme consists of an index with a simple structure, and its size is O(n), where n is the total size of the documents.
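For background, the sketch below builds a plaintext DAWG (suffix automaton) with the standard online construction and uses it to answer substring queries; it is not the paper's modified DAWG with distinct transition symbols, and no encryption is involved.

```python
class DAWG:
    def __init__(self, text: str):
        self.trans = [{}]     # outgoing transitions per state
        self.link = [-1]      # suffix links
        self.length = [0]     # length of the longest string reaching each state
        last = 0
        for ch in text:
            last = self._extend(last, ch)

    def _extend(self, last, ch):
        cur = len(self.trans)
        self.trans.append({}); self.link.append(-1); self.length.append(self.length[last] + 1)
        p = last
        while p != -1 and ch not in self.trans[p]:
            self.trans[p][ch] = cur
            p = self.link[p]
        if p == -1:
            self.link[cur] = 0
        else:
            q = self.trans[p][ch]
            if self.length[p] + 1 == self.length[q]:
                self.link[cur] = q
            else:
                clone = len(self.trans)
                self.trans.append(dict(self.trans[q]))
                self.link.append(self.link[q])
                self.length.append(self.length[p] + 1)
                while p != -1 and self.trans[p].get(ch) == q:
                    self.trans[p][ch] = clone
                    p = self.link[p]
                self.link[q] = self.link[cur] = clone
        return cur

    def contains(self, pattern: str) -> bool:
        state = 0
        for ch in pattern:
            if ch not in self.trans[state]:
                return False
            state = self.trans[state][ch]
        return True

# usage: DAWG("banana").contains("nan") -> True
```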
Yohei WATANABE Takeshi NAKAI Kazuma OHARA Takuya NOJIMA Yexuan LIU Mitsugu IWAMOTO Kazuo OHTA
Searchable symmetric encryption (SSE) enables clients to search encrypted data. Curtmola et al. (ACM CCS 2006) formalized a model and security notions of SSE and proposed two concrete constructions called SSE-1 and SSE-2. Since the seminal work by Curtmola et al., SSE has become an active area of encrypted search. In this paper, we focus on two unnoticed problems in the seminal paper by Curtmola et al. First, we show that SSE-2 does not appropriately implement Curtmola et al.'s construction idea for dummy addition. We refine SSE-2's (and its variants') dummy-adding procedure to keep the number of dummies sufficiently large but as small as possible. We then show how to extend it to the dynamic setting while keeping the dummy-adding procedure working well, and we implement our scheme to show its practical efficiency. Second, we point out that SSE-1 can cause a search error when a searched keyword is not contained in any document file stored at the server, and we show how to fix it.
Lu ZHANG Chengqun WANG Mengyuan FANG Weiqiang XU
To solve the problem of metamerism in the color reproduction process, various spectral reflectance reconstruction methods combined with neural networks have been proposed in recent years. However, these methods are generally sensitive to initial values and can easily converge to local optimal solutions, especially on small data sets. In this paper, we propose a spectral reflectance reconstruction algorithm based on the Back Propagation Neural Network (BPNN) and an improved Sparrow Search Algorithm (SSA). In this algorithm, to address the sensitivity of BPNN to initial values, we use SSA to initialize BPNN, and we use a sine chaotic mapping to further improve the stability of the algorithm. In the experiments, we tested the proposed algorithm on the X-Rite ColorChecker Classic Mini Chart, which contains 24 colors. The results show that the proposed algorithm has significantly better performance than other algorithms, and moreover it can meet the needs of spectral reflectance reconstruction on small data sets. Code is available at https://github.com/LuraZhang/spectral-reflectance-reconsctuction.
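The sine-chaotic initialization idea can be sketched as follows: a sine map generates the initial sparrow population, each individual being a flattened vector of BPNN weights, and the best candidate found seeds the network before ordinary back-propagation. The map form, bounds, population size, and the fitness stub are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

DIM = 120            # number of BPNN weights being initialized (assumed)
POP = 30
LOW, HIGH = -1.0, 1.0

def sine_chaotic_population(pop=POP, dim=DIM, rng=np.random.default_rng()):
    x = rng.random(dim)                      # chaotic seed in (0, 1)
    population = np.empty((pop, dim))
    for i in range(pop):
        x = np.abs(np.sin(np.pi * x))        # sine chaotic map iteration, stays in [0, 1]
        population[i] = LOW + (HIGH - LOW) * x
    return population

def fitness(weights, X, y):
    # stand-in for the BPNN training error obtained with the given initial weights
    return float(np.mean((X @ weights[: X.shape[1]] - y) ** 2))

# usage: rank the chaotic population by fitness and hand the best individual to the
# sparrow search / back-propagation stages (not reproduced here)
```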