Fuyuki KIHARA Chihiro MATSUI Ken TAKEUCHI
In this work, we propose a 1T1R ReRAM Computation-in-Memory (CiM) architecture for Hyperdimensional Computing (HDC). The number of Source Lines and Bit Lines is reduced by introducing memory cells connected in series, which is especially advantageous in a 3D implementation. The results of CiM operations contain errors, but HDC is robust to them: even when the XNOR operation has an error rate of 25%, the inference accuracy remains above 90%.
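The robustness claim is easy to reproduce in miniature. The sketch below (our illustration, not the paper's circuit model; the dimension, class count, and error rate are assumptions) flips each XNOR output with probability 0.25 and checks that associative lookup still returns the correct class:

```python
# Minimal sketch: binary HDC inference where each XNOR result is flipped with
# probability p_err to emulate CiM read errors.
import numpy as np

rng = np.random.default_rng(0)
D = 10000                                    # hypervector dimension
classes = rng.integers(0, 2, size=(10, D))   # 10 stored class hypervectors

def noisy_similarity(query, proto, p_err):
    xnor = (query == proto).astype(int)      # 1 where bits agree
    flips = rng.random(D) < p_err            # CiM errors flip XNOR outputs
    xnor[flips] ^= 1
    return xnor.sum()                        # popcount = Hamming similarity

query = classes[3].copy()
scores = [noisy_similarity(query, c, p_err=0.25) for c in classes]
print(int(np.argmax(scores)))                # usually 3: HDC tolerates errors
```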
Reliability is an important figure of merit of a system, and reliability requirements must be satisfied in safety-critical applications. This paper considers parallel applications on heterogeneous embedded systems and proposes a two-phase algorithm framework that minimizes energy consumption while satisfying an application's reliability requirement. The first phase produces an initial assignment; the second phase either satisfies the reliability requirement or improves energy efficiency. Specifically, when the application's reliability requirement cannot be achieved by the initial assignment, an algorithm that enhances the reliability of tasks is used to satisfy it. Conversely, when the reliability of the initial assignment exceeds the application's reliability requirement, an algorithm that reduces the execution frequency of tasks is used to improve energy efficiency. The proposed algorithms are compared with existing algorithms on real parallel applications. Experimental results demonstrate that the proposed algorithms consume less energy while satisfying the applications' reliability requirements.
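For concreteness, the sketch below shows the frequency/reliability/energy trade-off that drives both phases, using the fault-rate and dynamic-power models common in this literature (an assumption; the paper's exact models may differ):

```python
# Hedged sketch of the standard DVFS reliability/energy trade-off
# (assumed model; the paper's exact formulation may differ).
import math

F_MAX, F_MIN = 1.0, 0.4     # normalized frequency range
LAMBDA0, D = 1e-6, 3.0      # fault rate at f_max, rate sensitivity
C_EF, M = 1.0, 3            # effective capacitance, power exponent

def reliability(c, f):
    """Probability a task with WCET c (at f_max) finishes without a fault."""
    lam = LAMBDA0 * 10 ** (D * (F_MAX - f) / (F_MAX - F_MIN))
    return math.exp(-lam * c / f)

def energy(c, f):
    """Dynamic energy: power C_EF * f^M over execution time c / f."""
    return C_EF * (f ** M) * (c / f)

# Lowering f saves energy but raises the fault rate, which is why the second
# phase must either raise frequencies (reliability) or lower them (energy).
print(reliability(10, 1.0), energy(10, 1.0))
print(reliability(10, 0.5), energy(10, 0.5))
```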
Rina TAGAMI Hiroki KOBAYASHI Shuichi AKIZUKI Manabu HASHIMOTO
With the revitalization of the semiconductor industry and the push toward labor saving and unmanned operation in the retail and food manufacturing industries, the objects to be recognized at production sites are increasingly diverse in color and design. Depending on the target object, it may be more reliable to process only color information, only intensity information, or a combination of the two. However, few conventional methods optimize which color and intensity information to use, and deep learning is too costly for production sites. In this paper, we optimize the combination of color and intensity information of a small number of pixels used for matching, within the framework of template matching, on the basis of the mutual relationship between the target object and surrounding objects, and propose a fast and reliable matching method using these few pixels. Pixels with a low pixel-pattern frequency are selected from color and grayscale images of the target object, and from these, pixels that are highly discriminative against surrounding objects are carefully selected. The use of both color and intensity information makes the method highly versatile with respect to object design, and the use of a small number of pixels not shared by the target and surrounding objects provides high robustness to the surroundings and enables fast matching. Experiments using real images confirmed that with 14 pixels used for matching, the processing time is 6.3 msec and the recognition success rate is 99.7%. The proposed method also showed better positional accuracy than the comparison method, and the optimized pixels achieved a higher recognition success rate than non-optimized pixels.
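A rough sketch of the selection step, under our own simplifying assumptions (quantized color values stand in for the "pixel pattern frequency", and a fixed penalty stands in for the discriminability test against surrounding objects; all parameter names are ours):

```python
# Illustrative pixel selection: keep template pixels whose quantized value is
# rare in the template, then penalize those also seen on surrounding objects.
import numpy as np

def select_pixels(template, surroundings, n_pixels=14, bins=16):
    q = (template // (256 // bins)).reshape(-1, template.shape[-1])
    keys = [tuple(v) for v in q]
    freq = {k: keys.count(k) for k in set(keys)}          # pattern frequency
    rarity = np.array([freq[k] for k in keys], dtype=float)

    sq = (surroundings // (256 // bins)).reshape(-1, surroundings.shape[-1])
    s_keys = set(tuple(v) for v in sq)
    # Penalize pixels whose quantized value also appears on surrounding objects
    penalty = np.array([10.0 if k in s_keys else 0.0 for k in keys])

    order = np.argsort(rarity + penalty)                  # rare and distinctive
    ys, xs = np.unravel_index(order[:n_pixels], template.shape[:2])
    return list(zip(ys, xs))
```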
Yuka KO Katsuhito SUDOH Sakriani SAKTI Satoshi NAKAMURA
End-to-end speech translation (ST) directly translates source-language speech into the target language without the intermediate automatic speech recognition (ASR) output used in a cascade approach, and thereby avoids error propagation from intermediate ASR results. Although recent attempts have applied multi-task learning with an auxiliary ASR task to improve ST performance, they use cross-entropy loss against one-hot references in the ASR task, so the trained ST models do not consider possible ASR confusion. In this study, we propose a novel multi-task learning framework for end-to-end ST that leverages an ASR-based loss against posterior distributions obtained from a pre-trained ASR model, called ASR posterior-based loss (ASR-PBL). The ASR-PBL method, which enables an ST model to reflect possible ASR confusion among competing hypotheses with similar pronunciations, can be applied to a strong multi-task ST baseline with a hybrid CTC/Attention ASR task loss. In experiments on the Fisher Spanish-to-English corpus, the proposed method achieved better BLEU results than the baseline trained with standard cross-entropy loss.
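A minimal sketch of how such a posterior-based auxiliary loss can be computed (our reading of ASR-PBL; tensor shapes and masking are assumptions): the ASR branch is trained toward the frozen ASR model's token posteriors instead of one-hot references.

```python
import torch
import torch.nn.functional as F

def asr_posterior_loss(student_logits, teacher_logits, pad_mask):
    """student_logits, teacher_logits: (batch, time, vocab); pad_mask: (batch, time)."""
    log_p = F.log_softmax(student_logits, dim=-1)
    q = F.softmax(teacher_logits, dim=-1).detach()   # frozen ASR posteriors
    loss = -(q * log_p).sum(-1)                      # CE against a soft target
    return (loss * pad_mask).sum() / pad_mask.sum()
```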
Wenxia BAO An LIN Hua HUANG Xianjun YANG Hemu CHEN
Recent years have seen remarkable progress in human pose estimation, but manual annotation of keypoints remains tedious and imprecise. To alleviate this problem, this paper proposes a novel method called Multi-Scale Contrastive Learning (MSCL). The method uses a siamese network with upper and lower branches that capture different views of the same image. Each branch uses a backbone network to extract multi-scale feature vectors, which are passed through an enhanced feature pyramid for fusion, producing more robust feature representations. The fused vectors are then encoded by mapping and prediction heads to predict the feature vector of the other view. Using the negative cosine similarity between vectors as the loss function, the backbone network is pre-trained on a large-scale unlabeled dataset, enhancing its capacity to extract visual representations. Finally, transfer learning is performed on a small amount of labelled data for the pose estimation task. Experiments on COCO show improvements in Average Precision (AP) of 1.8%, 0.9%, and 1.2% with 1%, 5%, and 10% labelled data, respectively. In addition, the Percentage of Correct Keypoints (PCK) improves by 0.5% on MPII&AIC, outperforming mainstream contrastive learning methods.
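The loss described is the stop-gradient negative cosine similarity familiar from SimSiam; a minimal sketch (argument names are placeholders for the outputs of the paper's mapping and prediction heads):

```python
import torch
import torch.nn.functional as F

def neg_cosine(p, z):
    # stop-gradient on the target view prevents representation collapse
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def mscl_loss(p1, z1, p2, z2):
    """p*: prediction-head outputs, z*: mapping-head outputs of the two views."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```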
Jiakai LI Jianyong DUAN Hao WANG Li HE Qing ZHANG
Chinese spelling correction is a foundational task in natural language processing that aims to detect and correct spelling errors in text. Most Chinese spelling correction methods use multimodal information to model the relationship between incorrect and correct characters. However, because the features come from different sources, mismatches arise during fusion; the importance relationships between modalities are ignored, which restricts efficient learning. To this end, this paper proposes a multimodal language-model-based Chinese spelling corrector named MISpeller. Built on ChineseBERT as the base model, it captures and fuses character semantic, phonetic, and graphic information in a single model without constructing additional neural networks, and it accounts for the unequal importance of the modalities during fusion. In addition, to address overcorrection, a replication mechanism is introduced in which the replication factor serves as a dynamic weight for efficiently fusing the multimodal information. The model can thus control the proportion of original and predicted characters according to the input text and learn more specifically where errors occur. Experiments on the SIGHAN benchmark show that the proposed model improves the correction-level F1 score by an average of 4.36%, achieving state-of-the-art performance and validating the effectiveness of the model.
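A sketch of the replication mechanism as we understand it (names and shapes are illustrative, not the paper's exact formulation): a per-position factor mixes a copy distribution over the original character with the model's prediction.

```python
import torch

def mix_with_copy(pred_probs, input_ids, copy_factor):
    """pred_probs: (batch, seq, vocab); copy_factor: (batch, seq, 1) in [0, 1]."""
    copy_probs = torch.zeros_like(pred_probs)
    copy_probs.scatter_(-1, input_ids.unsqueeze(-1), 1.0)  # one-hot original char
    # high copy_factor keeps the original character (avoids overcorrection)
    return copy_factor * copy_probs + (1.0 - copy_factor) * pred_probs
```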
Yuxin HUANG Yuanlin YANG Enchang ZHU Yin LIANG Yantuan XIAN
Chinese-Vietnamese cross-lingual event retrieval aims to retrieve, from a set of Vietnamese sentences, the sentence describing the same event as a given Chinese query. Mainstream cross-lingual event retrieval methods extract a textual representation of the query and compute its similarity with representations of candidates in the other language. However, these methods ignore differences in event elements, so sentences with similar meanings but different event elements may be incorrectly considered to describe the same event. To address this problem, we propose a cross-lingual retrieval method that integrates event elements. We introduce event elements as an additional supervisory signal: an attention mechanism computes the semantic similarity of the event elements in the two sentences to determine their attention scores, establishing a one-to-one correspondence between event elements in the texts. Additionally, we leverage a multilingual pre-trained language model fine-tuned with contrastive learning to obtain cross-lingual sentence representations for computing sentence-level semantic similarity. Combining the two similarities yields the final text similarity score. Experimental results demonstrate that the proposed method achieves higher retrieval accuracy than the baseline model.
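A sketch of how the two signals might be combined (the weighting, temperature, and soft alignment are our assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def retrieval_score(query_emb, cand_emb, q_elem, c_elem, alpha=0.5):
    """query_emb, cand_emb: (dim,); q_elem: (m, dim); c_elem: (n, dim)."""
    sent_sim = F.cosine_similarity(query_emb, cand_emb, dim=0)
    q = F.normalize(q_elem, dim=-1)
    c = F.normalize(c_elem, dim=-1)
    sim = q @ c.T                              # (m, n) element-pair cosines
    attn = torch.softmax(sim / 0.1, dim=-1)    # soft one-to-one alignment
    elem_sim = (attn * sim).sum(-1).mean()     # similarity of aligned elements
    return alpha * sent_sim + (1 - alpha) * elem_sim
```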
Weizhi WANG Lei XIA Zhuo ZHANG Xiankai MENG
Smart contracts, a form of digital protocol, are computer programs designed for the automatic execution, control, and recording of contractual terms, permitting transactions without an intermediary. However, the economic value handled by smart contracts makes their vulnerabilities attractive targets for attacks, leading to significant losses. In this paper, we introduce HomoDec, a smart contract timestamp vulnerability detection technique based on code homogeneity. Its core idea is to compare the homogeneity between the code of the smart contract under test and known vulnerable smart contract code in a database to determine whether the tested code has a timestamp vulnerability. Specifically, HomoDec first explores how to vectorize smart contracts reasonably and efficiently, representing smart contract code as a high-dimensional vector that captures features of code vulnerabilities. Subsequently, it investigates how to determine the homogeneity between the code under test and the vectors in the vulnerability code base, enabling detection of potential timestamp vulnerabilities in smart contract code.
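A minimal sketch of the homogeneity test (the vectorizer itself is left abstract, and the similarity measure and threshold are assumptions):

```python
import numpy as np

def has_timestamp_vuln(code_vec, vuln_db, threshold=0.9):
    """code_vec: (d,) embedding of the contract under test;
    vuln_db: (n, d) embeddings of known vulnerable contracts."""
    a = code_vec / np.linalg.norm(code_vec)
    b = vuln_db / np.linalg.norm(vuln_db, axis=1, keepdims=True)
    # flag the contract if it is highly similar to any known vulnerable code
    return float((b @ a).max()) >= threshold
```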
Na XING Lu LI Ye ZHANG Shiyi YANG
Unmanned aerial vehicle (UAV)-assisted systems have attracted much attention due to their high probability of line-of-sight (LoS) connections and flexible deployment. In this paper, we aim to minimize the upload time required for a UAV to collect information from sensor nodes in disaster scenarios by optimizing the UAV's deployment position. To obtain a deployment solution quickly, we propose a data-driven approach in which an optimization strategy acts as the expert. Since images capture spatial configurations well, we use a convolutional neural network (CNN) to learn where to place the UAV. Simulation results demonstrate the effectiveness and generalization of the proposed method: after training, our CNN generates UAV configurations faster than a general optimization-based algorithm.
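A sketch of the learning setup (layer sizes and the single-channel node-map encoding are our assumptions): a CNN maps an image of the node layout to a UAV position, supervised by the optimization-based expert's solutions.

```python
import torch
import torch.nn as nn

class PlacementCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, 2)   # (x, y) deployment position

    def forward(self, grid):                   # grid: (B, 1, H, W) node map
        return self.head(self.features(grid))

# Training pairs (grid, expert_xy) come from the optimization-based expert;
# at run time the forward pass replaces the slow optimizer.
```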
Zhe WANG Zhe-Ming LU Hao LUO Yang-Ming ZHENG
To accurately extract tabular data, we propose a novel cell-based tabular data extraction model (TDEM). The key idea of TDEM is to utilize the grayscale projection of row separation lines, coupled with table masks and column masks generated by a VGG-19 neural network, to segment each individual cell from the input table image. The text content of the table is then extracted from each single cell, which greatly improves the accuracy of table recognition.
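A sketch of the grayscale-projection step (the threshold is illustrative, and we assume row separators appear as near-uniform bright rows between text lines):

```python
import numpy as np

def row_separators(gray, rel_thresh=0.95):
    """gray: (H, W) uint8 table image; returns candidate separator rows."""
    profile = gray.mean(axis=1)                    # horizontal projection
    cut = rel_thresh * profile.max()
    return np.where(profile >= cut)[0]             # near-uniform bright rows
```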
Zheqing ZHANG Hao ZHOU Chuan LI Weiwei JIANG
Single-image dehazing is a challenging task in computer vision. To address the limited representation capability of traditional convolutional neural networks and the high computational overhead of the self-attention mechanism, we propose image attention and design a single-image dehazing network based on it: IAD-Net. The proposed image attention is a plug-and-play module with global modeling ability. IAD-Net has a parallel structure that combines the global modeling ability of image attention with the local modeling ability of convolution, so that the network learns both global and local features. The network has strong feature-learning and feature-expression ability, incurs low computational overhead, and better restores the details of hazy images. Experiments verify the effectiveness of the image attention module and the competitiveness of IAD-Net with state-of-the-art methods.
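The paper's image attention module is not reproduced here; as a stand-in, the sketch below shows the parallel global/local structure with a simple pooled channel-attention branch (our placeholder) next to a convolutional branch:

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.local = nn.Sequential(                 # local modeling branch
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.global_attn = nn.Sequential(           # global modeling branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x + self.local(x) + x * self.global_attn(x)
```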
Takeru INOUE Norihito YASUDA Hidetomo NABESHIMA Masaaki NISHINO Shuhei DENZUMI Shin-ichi MINATO
This paper reports the details of the International Competition on Graph Counting Algorithms (ICGCA) held in 2023. The graph counting problem asks to count the subgraphs of a given graph that satisfy specified constraints; it is #P-complete, a computationally hard class. Since many essential systems in modern society, e.g., infrastructure networks, are often represented as graphs, graph counting algorithms are a key technology for efficiently scanning all the subgraphs that represent feasible states of a system. In the ICGCA, contestants were asked to count the paths on a graph under a length constraint. The benchmark set included 150 challenging instances, emphasizing graphs resembling infrastructure networks. Eleven solvers were submitted and ranked by the number of benchmarks correctly solved within a time limit. The winning solver, TLDC, was designed based on three fundamental approaches: backtracking search, dynamic programming, and model counting or #SAT (a counting version of Boolean satisfiability). Detailed analyses show that each approach has its own strengths and that no single approach is likely to dominate the others. The codes and papers of the participating solvers are available at https://afsa.jp/icgca/.
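The competition task in miniature: the sketch below counts simple s-t paths under a length bound by backtracking, one of the three fundamental approaches the winning solver combines.

```python
def count_paths(adj, s, t, max_len):
    """adj: dict of vertex -> iterable of neighbors; counts simple paths."""
    def dfs(v, used, length):
        if v == t:
            return 1
        total = 0
        for w in adj[v]:
            if w not in used and length + 1 <= max_len:
                total += dfs(w, used | {w}, length + 1)
        return total
    return dfs(s, {s}, 0)

# Example: the 4-cycle a-b-c-d-a has two a->c paths of length <= 2.
adj = {"a": "bd", "b": "ac", "c": "bd", "d": "ac"}
print(count_paths(adj, "a", "c", 2))   # -> 2
```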
Tetsuya ARAKI Shin-ichi NAKANO
The dispersion problem, a variant of the facility location problem, has been extensively studied. Given a polygon with n edges in the plane, we want to find k points in the polygon so that the minimum pairwise Euclidean distance of the k points is maximized; we call this the k-dispersion problem in a polygon. Intuitively, for an island, we want to locate k drone bases far away from each other in flying distance to avoid congestion in the sky. In this paper, we give a polynomial-time approximation scheme (PTAS) for this problem when k is a constant and ε is a positive real number less than 1. Our algorithm runs in O(((1/ε)^2 + n/ε)^k) time with approximation ratio 1/(1 + ε), the first PTAS developed for this problem. Additionally, we consider three variations of the dispersion problem and design a PTAS for each of them.
Keita EMURA Kaisei KAJITA Go OHTAKE
As a multi-receiver variant of public key encryption with keyword search (PEKS), broadcast encryption with keyword search (BEKS) has been proposed (Attrapadung et al. at ASIACRYPT 2006/Chatterjee-Mukherjee at INDOCRYPT 2018). Unlike broadcast encryption, no receiver anonymity is considered because the test algorithm takes a set of receivers as input, so the set of receivers must be contained in a ciphertext. In this paper, we propose a generic construction of BEKS from anonymous and weakly robust 3-level hierarchical identity-based encryption (HIBE). The proposed generic construction provides outsider anonymity, where an adversary is allowed to obtain secret keys of outsiders who do not belong to the challenge sets, and provides ciphertexts of size sublinear in the number of receivers. Moreover, the proposed construction is secure against chosen-ciphertext attacks (CCA), where an adversary is allowed to access a test oracle in the searchable encryption context. The proposed generic construction can be seen as an extension of the Fazio-Perera generic construction of anonymous broadcast encryption (PKC 2012) from anonymous and weakly robust identity-based encryption (IBE) and the Boneh et al. generic construction of PEKS (EUROCRYPT 2004) from anonymous IBE: we apply the Fazio-Perera construction to the first-level identity and the Boneh et al. construction to the second-level identity, i.e., a keyword is regarded as a second-level identity. The third-level identity is used to provide CCA security by employing one-time signatures. We also introduce weak robustness in the HIBE setting and demonstrate that the Abdalla et al. generic transformation (TCC 2010/JoC 2018) for providing weak robustness to IBE works for HIBE with an appropriate parameter setting. Finally, we explicitly give attractive concrete instantiations of the proposed generic construction from pairings and lattices, respectively.
Shoichi HIROSE Hidenori KUWAKADO
In 2005, Nandi introduced a class of double-block-length compression functions hπ(x) := (h(x), h(π(x))), where h is a random oracle with an n-bit output and π is a non-cryptographic public permutation. Nandi demonstrated that, in the classical setting, the collision resistance of hπ is optimal if π has no fixed point. Our study explores the collision resistance of hπ and of the Merkle-Damgård hash function using hπ in the quantum random oracle model. First, we reveal that the quantum collision resistance of hπ may not be optimal even if π has no fixed point: if π is an involution, a colliding pair of inputs can be found for hπ with only O(2^{n/2}) queries by Grover search. Second, we present a sufficient condition on π for the optimal quantum collision resistance of hπ, under which any collision attack needs Ω(2^{2n/3}) queries to find a colliding pair of inputs; the proof uses Zhandry's recent compressed-oracle technique. Third, we show that the quantum collision resistance of the Merkle-Damgård hash function using hπ can be optimal even if π is an involution. Finally, we discuss the quantum collision resistance of double-block-length compression functions using a block cipher.
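The involution attack admits a one-line justification (our rendering of the idea stated above):

```latex
\[
  h^{\pi}(\pi(x)) \;=\; \bigl(h(\pi(x)),\, h(\pi(\pi(x)))\bigr)
               \;=\; \bigl(h(\pi(x)),\, h(x)\bigr),
\]
% so for a fixed-point-free involution $\pi$, any $x$ with $h(x) = h(\pi(x))$
% yields the colliding pair $(x, \pi(x))$. A random $x$ satisfies this with
% probability $2^{-n}$, so Grover search finds one with
% $O\!\left(\sqrt{2^{n}}\right) = O(2^{n/2})$ quantum queries.
```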
Shi BAO Xiaoyan SONG Xufei ZHUANG Min LU Gao LE
Images with rich color information are an important source of information that people obtain from the objective world, but it is difficult for people with red-green color vision deficiencies to obtain color information from color images. We propose a color correction method for dichromats based on their physiological characteristics, considering hue information. First, the hue loss of color pairs under normal color vision is defined; an objective function is constructed on this basis, and the resultant image is obtained by minimizing it. Finally, the effectiveness of the proposed method is verified through comparison tests. People with red-green color vision deficiency fail to distinguish some red and green colors. When the line connecting red and green is parallel to the a* axis of CIE L*a*b*, they cannot distinguish the color pair, but they can distinguish a color pair whose connecting line is parallel to the b* axis. Therefore, correcting color pairs whose connecting line is parallel to the a* axis yields good results. When color correction is performed, the hue loss between the two colors under normal color vision is supplemented along b* so that red-green color vision-deficient individuals can perceive the difference between the colors of the pair. The magnitude of the correction is greatest when the connecting line of the color pair is parallel to the a* axis, and no correction is applied when the connecting line is parallel to the b* axis. Objective evaluation shows that the method achieves a higher score, indicating that it reduces confusing colors while maintaining the naturalness of the image.
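A sketch of the geometric rule just stated (the cosine weighting is our paraphrase of "greatest when parallel to a*, zero when parallel to b*"):

```python
import math

def b_correction(hue_loss, a1, b1, a2, b2):
    """Return the b* supplement for a color pair in CIE L*a*b*."""
    theta = math.atan2(b2 - b1, a2 - a1)   # angle of the connecting line
    weight = abs(math.cos(theta))          # 1 if parallel to a*, 0 if to b*
    return hue_loss * weight

print(b_correction(20.0, -40, 0, 40, 0))   # parallel to a*: full correction
print(b_correction(20.0, 0, -40, 0, 40))   # parallel to b*: no correction
```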
Hongtian ZHAO Hua YANG Shibao ZHENG
Minutiae pattern extraction plays a crucial role in fingerprint registration and identification for electronic applications. However, the extraction accuracy is seriously compromised by the presence of contaminated ridge lines and complex background scenarios. General image processing-based methods, which rely on many prior hypotheses, fail to effectively handle minutiae extraction in complex scenarios. Previous works have shown that CNN-based methods can perform well in object detection tasks. However, the deep neural networks (DNNs)-based methods are restricted by the limitation of public labeled datasets due to legitimate privacy concerns. To address these challenges comprehensively, this paper presents a fully automated minutiae extraction method leveraging DNNs. Firstly, we create a fingerprint minutiae dataset using a semi-automated minutiae annotation algorithm. Subsequently, we propose a minutiae extraction model based on Residual Networks (Resnet) that enables end-to-end prediction of minutiae. Moreover, we introduce a novel non-maximal suppression (NMS) procedure, guided by the Generalized Intersection over Union (GIoU) metric, during the inference phase to effectively handle outliers. Experimental evaluations conducted on the NIST SD4 and FVC 2004 databases demonstrate the superiority of the proposed method over existing state-of-the-art minutiae extraction approaches.
Minutiae pattern extraction plays a crucial role in fingerprint registration and identification for electronic applications. However, extraction accuracy is seriously compromised by contaminated ridge lines and complex backgrounds. General image-processing-based methods, which rely on many prior hypotheses, fail to handle minutiae extraction in complex scenarios effectively. Previous works have shown that CNN-based methods perform well in object detection tasks, but deep neural network (DNN)-based methods are restricted by the scarcity of public labeled datasets owing to legitimate privacy concerns. To address these challenges, this paper presents a fully automated minutiae extraction method leveraging DNNs. First, we create a fingerprint minutiae dataset using a semi-automated minutiae annotation algorithm. Subsequently, we propose a minutiae extraction model based on residual networks (ResNet) that enables end-to-end prediction of minutiae. Moreover, we introduce a novel non-maximum suppression (NMS) procedure, guided by the Generalized Intersection over Union (GIoU) metric, during the inference phase to handle outliers effectively. Experimental evaluations on the NIST SD4 and FVC 2004 databases demonstrate the superiority of the proposed method over existing state-of-the-art minutiae extraction approaches. See the GIoU sketch after this abstract.
Feng WEN Haixin HUANG Xiangyang YIN Junguang MA Xiaojie HU
Multi-object tracking (MOT) algorithms are typically classified as one-shot or two-step algorithms. One-shot MOT algorithms are widely studied and applied due to their fast inference speed. However, they combine the two sub-tasks of detection and re-ID, whose optimization directions conflict, limiting tracking performance. Additionally, MOT algorithms often suffer from serious ID switching, which degrades tracking quality. To address these challenges, this study proposes the DETrack algorithm, which consists of feature decomposition and feature enhancement modules. The feature decomposition module exploits the differences and correlations between the tasks to resolve the conflict: it mitigates the competition between detection and re-ID while enhancing their cooperation. The feature enhancement module improves feature quality and alleviates target ID switching. Experimental results demonstrate that DETrack improves multi-object tracking performance while reducing the number of ID switches, showing that the designed feature decomposition and feature enhancement significantly enhance tracking effectiveness.
Radar emitter identification (REI) is a crucial function of electronic radar warfare support systems; the task is to identify and locate unique transmitters, avoid potential threats, and prepare countermeasures. Owing to the remarkable effectiveness of deep learning (DL) in uncovering latent features within data and performing classification, deep neural networks (DNNs) have seen widespread application in REI. In many real-world scenarios, obtaining a large number of annotated radar transmitter samples for training identification models is essential yet challenging. Given the scarcity of labeled data and the abundance of unlabeled training data, we propose a novel REI method based on a semi-supervised learning (SSL) framework with virtual adversarial training (VAT). Specifically, two objective functions are designed to extract the semantic features of radar signals: a cross-entropy loss for labeled samples and a virtual adversarial training loss for all samples. Additionally, a pseudo-labeling approach is employed for unlabeled samples. The proposed VAT-based SS-REI method is evaluated on a radar dataset; simulation results indicate that it outperforms the latest SS-REI methods in recognition performance.
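A sketch of the VAT term (the hyperparameters xi and eps and the single power-iteration step are the usual defaults of VAT, not necessarily the paper's settings): find the small input perturbation that most changes the output distribution and penalize the divergence it causes.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=1.0):
    with torch.no_grad():
        p = F.softmax(model(x), dim=-1)            # current prediction
    d = torch.randn_like(x)
    d = xi * d / d.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + d), dim=-1), p, reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]           # one power-iteration step
    r_adv = eps * grad / grad.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
    # penalize output change under the (approximately) worst-case perturbation
    return F.kl_div(F.log_softmax(model(x + r_adv.detach()), dim=-1),
                    p, reduction="batchmean")
```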
Feng WANG Xiangyu WEN Lisheng LI Yan WEN Shidong ZHANG Yang LIU
The rapid advancement of cloud-edge-end collaboration offers a feasible solution for low-delay, low-energy data processing in internet of things (IoT)-based smart distribution grids. The major concern in cloud-edge-end collaboration is resource management. However, the joint optimization of heterogeneous resources involves multiple timescales, and the optimization decisions at different timescales are intertwined. In addition, burst electromagnetic interference affects the channel environment of the distribution grid, leading to inaccurate optimization decisions and, in turn, slow convergence and strong fluctuations. Hence, we propose a cloud-edge-end collaborative multi-timescale, multi-service resource management algorithm. Large-timescale device scheduling is optimized by sliding-window pricing matching, which enables accurate matching estimation and effective conflict elimination. Small-timescale compression-level selection and power control are jointly optimized by a disturbance-robust upper confidence bound (UCB) algorithm, which perceives the presence of electromagnetic interference and adjusts its exploration tendency to improve convergence. Simulation results illustrate the excellent performance of the proposed algorithm.
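A sketch of the small-timescale decision cast as a bandit (the disturbance-robust adjustment is summarized by a single exploration scale beta; the paper's exact mechanism is not reproduced here):

```python
import math

def ucb_select(counts, means, t, beta=1.0):
    """Pick a (compression level, power) arm by UCB1. Raising beta when
    interference is detected makes the agent re-explore a disturbed channel."""
    for a in range(len(counts)):
        if counts[a] == 0:
            return a                          # try every arm once
    scores = [means[a] + beta * math.sqrt(2 * math.log(t) / counts[a])
              for a in range(len(counts))]
    return max(range(len(counts)), key=scores.__getitem__)
```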