Multi-focus image fusion combines partially focused images of the same scene into a single all-in-focus image. To address two problems of existing multi-focus fusion algorithms, namely that ground-truth all-in-focus images are difficult to obtain and that convolutional neural networks attend too heavily to local regions, we propose a fusion algorithm that combines local and global feature encoding. Initially, we devise two self-supervised image reconstruction tasks and train an encoder-decoder network through multi-task learning. Subsequently, within the encoder, we merge a dense connection module with the PS-ViT module, enabling the network to exploit both local and global information during feature extraction. Finally, to improve overall model efficiency, distinct loss functions are applied to each task. To preserve the more robust features of the original images, spatial frequency is employed during the fusion stage to obtain the feature map of the fused image. Experimental results demonstrate that, compared with twelve other prominent algorithms, our method achieves good fusion performance in objective evaluation: ten of the twelve selected evaluation metrics improve by more than 0.28%. It also produces superior visual results in subjective comparisons.
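The spatial-frequency fusion rule mentioned above can be sketched as follows, using the common block-wise definition; the paper's exact block size and any decision-map post-processing are not specified here, so the function names and the 8x8 block size are illustrative assumptions:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of a 2-D block: SF = sqrt(RF^2 + CF^2),
    where RF/CF are the RMS horizontal/vertical gradients."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_by_sf(feat_a, feat_b, block=8):
    """Block by block, keep the feature map whose block has the higher
    spatial frequency (i.e., the sharper, more in-focus source)."""
    h, w = feat_a.shape
    fused = np.empty_like(feat_a, dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = feat_a[i:i + block, j:j + block]
            b = feat_b[i:i + block, j:j + block]
            fused[i:i + block, j:j + block] = (
                a if spatial_frequency(a) >= spatial_frequency(b) else b)
    return fused
```

A sharper region yields larger gradient energy, so the rule selects it over a defocused (flat) one.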
Yuya ICHIKAWA Ayumu YAMADA Naoko MISAWA Chihiro MATSUI Ken TAKEUCHI
Integrating RGB and event sensors improves object detection accuracy, especially at night, thanks to the high dynamic range of event cameras. However, introducing an event sensor increases the required computational resources, which makes implementing RGB-event fusion multi-modal AI on computation-in-memory (CiM) hardware difficult. To tackle this issue, this paper proposes RGB-Event fusion Multi-modal analog Computation-in-Memory, called REM-CiM, for multi-modal edge object detection AI. In REM-CiM, the multi-modal AI algorithm and its circuit implementation are co-designed through two proposals. First, the Memory capacity-Efficient Attentional Feature Pyramid Network (MEA-FPN), a model architecture for RGB-event fusion on analog CiM, is proposed for parameter-efficient RGB-event fusion. Convolution-less bi-directional calibration (C-BDC) in MEA-FPN extracts the important features of each modality with attention modules while reducing the number of weight parameters by removing the large convolutional operations of conventional BDC. The proposed MEA-FPN with C-BDC achieves a 76% reduction in parameters while keeping the mean Average Precision (mAP) degradation below 2.3% during both day and night, compared with Attentional FPN fusion (A-FPN), a conventional BDC-adopted FPN fusion. Second, low-bit quantization with clipping (LQC) is proposed to reduce area and energy. The proposed REM-CiM with MEA-FPN and LQC requires almost the same number of memory cells, 21% less ADC area and 24% less ADC energy, and achieves 0.17% higher mAP than a conventional FPN fusion CiM without LQC.
Jiakai LI Jianyong DUAN Hao WANG Li HE Qing ZHANG
Chinese spelling correction is a foundational task in natural language processing that aims to detect and correct spelling errors in text. Most Chinese spelling correction methods use multimodal information to model the relationship between incorrect and correct characters. However, because the features come from different sources, feature mismatches occur during fusion, the importance relationships between modalities are ignored, and the model cannot learn efficiently. To this end, this paper proposes a multimodal language model-based Chinese spelling corrector named MISpeller. Built on ChineseBERT as the base model, the method comprehensively captures and fuses character semantic, phonetic, and graphic information within a single model without constructing additional neural networks, and accounts for the unequal contribution of the different feature modalities during fusion. In addition, to address overcorrection, a replication mechanism is introduced, with the replication factor serving as a dynamic weight to fuse the multimodal information efficiently. The model controls the proportion of original and predicted characters according to the input text and learns more specifically where errors occur. Experiments on the SIGHAN benchmark show that the proposed model achieves state-of-the-art performance, improving correction-level F1 by an average of 4.36%, which validates its effectiveness.
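A replication mechanism with a dynamic weight can be sketched in pointer-generator style; this is an illustrative reading, not MISpeller's exact formulation (the function name, per-character gating, and how the copy factor is predicted are assumptions):

```python
import numpy as np

def mix_with_copy(p_gen, input_char_id, copy_factor, vocab_size):
    """Blend the generator's distribution with a copy of the original
    character; `copy_factor` (the replication factor) controls how much
    probability mass stays on the input character."""
    p_copy = np.zeros(vocab_size)
    p_copy[input_char_id] = 1.0  # copy distribution: keep the input character
    return copy_factor * p_copy + (1.0 - copy_factor) * p_gen
```

A high copy factor biases the output toward keeping the original character, which is how such mechanisms curb overcorrection.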
Qi LIU Bo WANG Shihan TAN Shurong ZOU Wenyi GE
For flight simulators, it is crucial to create three-dimensional terrain from clear remote sensing images. However, due to haze and other factors, the acquired remote sensing images typically have low contrast and blurry features. To support a flight simulator visual system, we propose a deep learning-based dehazing model for remote sensing images. The proposed encoder-decoder architecture consists of a multiscale fusion module and a gated large-kernel convolutional attention module; it fuses multi-resolution global and local semantic features and adaptively extracts image features over complex terrain. Experimental results demonstrate that the model outperforms existing techniques, generalizes well, and achieves high-confidence dehazing on remote sensing images with various haze concentrations, complex terrains, and spatial resolutions.
Jia-ji JIANG Hai-bin WAN Hong-min SUN Tuan-fa QIN Zheng-qiang WANG
In this paper, the Towards High Performance Voxel-based 3D Object Detection (Voxel-RCNN) three-dimensional (3D) point cloud object detection model is used as the baseline network. To address problems of current mainstream voxel-based 3D point cloud methods, such as the limited feature extraction ability of the 3D backbone and the lack of feature expression ability under the bird's-eye view (BEV), a high-performance voxel-based 3D object detection network (Reinforced Voxel-RCNN) is proposed. First, a 3D feature extraction module integrating an inverted residual convolutional network with weight normalization is designed for the 3D backbone. This module not only retains more point cloud feature information and enhances the information interaction between convolutional layers, but also improves the feature extraction ability of the backbone network. Second, a spatial feature-semantic fusion module based on spatial and channel attention is proposed from the BEV perspective. The combined use of channel and semantic features further improves the network's ability to express point cloud features. In experiments on the public KITTI dataset, our results surpass those of many voxel-based methods. Compared with the baseline network, both 3D average precision and BEV average precision improve on the Car, Cyclist, and Pedestrian categories: in 3D average precision, by 0.23% for Car, 0.78% for Cyclist, and 2.08% for Pedestrian; in BEV average precision, by 0.32% for Car, 0.99% for Cyclist, and 2.38% for Pedestrian. The findings demonstrate that the proposed enhancements effectively improve detection accuracy for each target category.
Xiaosheng YU Jianning CHI Ming XU
Accurate segmentation of the fundus vessel structure can effectively assist doctors in diagnosing eye diseases. In this paper, we propose a fundus blood vessel segmentation network that combines cross-modal features and verify our method on the public dataset OCTA-500. Experimental results show that our method has high accuracy and robustness.
Highly conflicting evidence that may lead to counter-intuitive results is one of the challenges for information fusion in Dempster-Shafer evidence theory. To deal with this issue, evidence conflict is investigated via belief divergence, which measures the discrepancy between bodies of evidence. In this paper, the pignistic probability transform belief χ2 divergence, named the BBχ2 divergence, is proposed. By introducing the pignistic probability transform, the proposed BBχ2 divergence accurately quantifies the difference between bodies of evidence while taking multi-element sets into account. Compared with several existing belief divergences, the novel divergence is more precise. Based on this divergence, a new multi-source information fusion method is devised that considers both credibility weights and information volume weights to determine the overall weight of each body of evidence. Finally, the proposed method is applied to target recognition and fault diagnosis, where comparative analysis indicates that it achieves the highest accuracy in managing evidence conflict.
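The pignistic probability transform underlying the BBχ2 divergence can be sketched as follows; the χ2-type distance below is a standard symmetric form and may differ from the exact BBχ2 weighting in the paper (function names and the divergence formula are assumptions):

```python
def pignistic(m, frame):
    """Pignistic probability transform: BetP(x) = sum over focal sets A
    containing x of m(A)/|A|. `m` maps frozenset focal elements to masses."""
    betp = {x: 0.0 for x in frame}
    for focal, mass in m.items():
        for x in focal:
            betp[x] += mass / len(focal)  # mass shared equally inside A
    return betp

def chi2_divergence(p, q):
    """Symmetric chi-square-type divergence between two discrete
    distributions over the same frame (zero-mass elements skipped)."""
    return sum((p[x] - q[x]) ** 2 / (p[x] + q[x])
               for x in p if p[x] + q[x] > 0)
```

The transform flattens mass on multi-element focal sets into a probability distribution, which is what lets the divergence account for those sets.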
Qingqi ZHANG Xiaoan BAO Ren WU Mitsuru NAKATA Qi-Wei GE
Automatic detection of prohibited items is vital in helping security staff work more efficiently while improving the public safety index. However, prohibited item detection in X-ray security inspection images is limited by several factors, including the imbalanced distribution of categories, the diversity of prohibited item scales, and overlap between items. In this paper, we leverage the Poisson blending algorithm with the Canny edge operator to maximally alleviate the imbalanced category distribution in the X-ray image dataset. On this basis, we improve the cascade network to deal with the other two difficulties. To address scale diversity, we propose the Re-BiFPN feature fusion method, which includes a coordinate attention atrous spatial pyramid pooling (CA-ASPP) module and a recursive connection. The CA-ASPP module implicitly extracts direction-aware and position-aware information from the feature map. The recursive connection feeds the multi-scale feature map processed by the CA-ASPP module back to the bottom-up backbone layer for further multi-scale feature extraction. In addition, a Rep-CIoU loss function is designed to address the overlap problem in X-ray images. Extensive experimental results demonstrate that our method successfully identifies ten types of prohibited items, such as Knives, Scissors, and Pressure, and achieves 83.4% mAP, 3.8% higher than the original cascade network. Moreover, our method outperforms other mainstream methods by a significant margin.
Zhengwei XIA Yun LIU Xiaoyun WANG Feiyun ZHANG Rui CHEN Weiwei JIANG
Infrared and visible image fusion combines thermal radiation information and textures to provide a high-quality fused image. In this letter, we propose a hybrid variational fusion model to achieve this end. Specifically, an ℓ0 term is adopted to preserve the highlighted targets with salient gradient variation in the infrared image, an ℓ1 term is used to suppress noise in the fused image, and an ℓ2 term is employed to keep the textures of the visible image. Experimental results demonstrate the superiority of the proposed variational model: our results have sharper textures with less noise.
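One plausible way to write such a hybrid energy, with $u$ the fused image, $I_r$ the infrared input and $I_v$ the visible input, is sketched below; the exact data terms and weights used in the letter may differ, so this is an illustrative form only:

```latex
E(u) \;=\; \lambda_0\,\|\nabla u - \nabla I_r\|_0
      \;+\; \lambda_1\,\|\nabla u\|_1
      \;+\; \lambda_2\,\|\nabla u - \nabla I_v\|_2^2
```

Here the $\ell_0$ term matches the salient infrared gradients (highlighted targets), the $\ell_1$ total-variation-like term suppresses noise in $u$, and the $\ell_2$ term keeps $u$'s gradients close to the visible image's textures, mirroring the three roles stated in the abstract.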
Shiyu TENG Jiaqing LIU Yue HUANG Shurong CHAI Tomoko TATEYAMA Xinyin HUANG Lanfen LIN Yen-Wei CHEN
Depression is a prevalent mental disorder affecting a significant portion of the global population, leading to considerable disability and contributing to the overall burden of disease. Consequently, designing efficient and robust automated methods for depression detection has become imperative. Recently, deep learning methods, especially multimodal fusion methods, have been increasingly used in computer-aided depression detection. Importantly, individuals with depression and those without respond differently to various emotional stimuli, providing valuable information for detecting depression. Building on these observations, we propose an intra- and inter-emotional stimulus transformer-based fusion model to effectively extract depression-related features. The intra-emotional stimulus fusion framework aims to prioritize different modalities, capitalizing on their diversity and complementarity for depression detection. The inter-emotional stimulus model maps each emotional stimulus onto both invariant and specific subspaces using individual invariant and specific encoders. The emotional stimulus-invariant subspace facilitates efficient information sharing and integration across different emotional stimulus categories, while the emotional stimulus specific subspace seeks to enhance diversity and capture the distinct characteristics of individual emotional stimulus categories. Our proposed intra- and inter-emotional stimulus fusion model effectively integrates multimodal data under various emotional stimulus categories, providing a comprehensive representation that allows accurate task predictions in the context of depression detection. We evaluate the proposed model on the Chinese Soochow University students dataset, and the results outperform state-of-the-art models in terms of concordance correlation coefficient (CCC), root mean squared error (RMSE) and accuracy.
Shota AKIYOSHI Yuzo TAENAKA Kazuya TSUKAMOTO Myung LEE
Cross-domain data fusion is becoming a key driver in the growth of numerous and diverse applications in the Internet of Things (IoT) era. We have proposed the concept of a new information platform, the Geo-Centric Information Platform (GCIP), which enables IoT data fusion based on geolocation, i.e., produces spatio-temporal content (STC), and then provides the STC to users. In this environment, users cannot know in advance "when," "where," or "what type" of STC is being generated, because the type and timing of STC generation vary dynamically with the diversity of IoT data generated in each geographical area. This makes it difficult to directly search for a specific STC requested by the user using a content identifier (the domain name of a URI or a content name). To solve this problem, a new content discovery method is needed that does not directly specify content identifiers while taking into account both (1) spatial and (2) temporal constraints. In our previous study, we proposed a content discovery method that considers only spatial constraints. This paper proposes a new content discovery method that matches user requests with content metadata (topic) characteristics while taking both spatial and temporal constraints into account. Simulation results show that the proposed method successfully discovers appropriate STC in response to a user request.
Yu ZHOU Jianyong HU Xudong MIAO Xiaoni DU
Low confusion coefficient values can make side-channel attacks harder for vector Boolean functions in block ciphers. In this paper, we give new results on the confusion coefficient of f ⊞ g, f ⊡ g, f ⊕ g and fg for different Boolean functions f and g. We also deduce a relationship between the sum-of-squares of the confusion coefficient of one n-variable function and that of its two (n - 1)-variable decomposition functions. Finally, we find that the confusion coefficient of vector Boolean functions is affine invariant.
Gouki OKADA Makoto NAKASHIZUKA
This paper presents a deep network based on unrolling the diffusion process with the morphological Laplacian. The diffusion process is an iterative algorithm that solves the diffusion equation and represents time evolution with the Laplacian. It is applied to image smoothing and has been extended with non-linear operators for various image processing tasks. In this study, we introduce the morphological Laplacian into the basic diffusion process and unroll it into a deep network. Morphological filters are non-linear operators whose parameters are referred to as structuring elements. The discrete Laplacian can be approximated with morphological filters without multiplications. Owing to the non-linearity of the morphological filter with trainable structuring elements, training uses error back-propagation, and the morphological network can be adapted to specific image processing applications. We introduce two extensions of the morphological Laplacian for deep networks. Since the morphological filters are realized with addition, max, and min, errors caused by the limited bit-length are not amplified. Consequently, the morphological parts of the network are implemented in unsigned 8-bit integers with single instruction multiple data (SIMD) instructions to achieve fast computation on small devices. We applied the proposed network to image completion and Gaussian denoising; the results and computational time are compared with other denoising algorithms and deep networks.
Yu KASHIHARA Takashi MATSUBARA
The diffusion model has achieved success in generating and editing high-quality images because of its ability to produce fine details. Its superior generation ability has the potential to facilitate more detailed segmentation. This study presents a novel approach to segmentation tasks using an inverse heat dissipation model, a kind of diffusion-based model. The proposed method generates a mask that gradually shrinks to fit the shape of the desired segmentation region. We comprehensively evaluated the proposed method on multiple datasets under varying conditions. The results show that the proposed method outperforms existing methods and provides more detailed segmentation.
Various haze removal methods based on the atmospheric scattering model have been presented in recent years. Most target strong haze, where light is scattered equally in all color channels. This paper presents a haze removal method using near-infrared (NIR) images for relatively weak haze. To recover lost edges, the method first extracts edges from an appropriately weighted NIR image and fuses them with the color image. By introducing a wavelength-dependent scattering model, our method then estimates the transmission map for each color channel and recovers color more naturally from the edge-recovered image. Finally, the edge-recovered and color-recovered images are blended. In this blending process, regions with high lightness, such as sky and clouds, where unnatural color shifts are likely to occur, are effectively estimated, and an optimal weighting map is obtained. Qualitative and quantitative evaluations on 59 pairs of color and NIR images demonstrated that our method recovers edges and colors more naturally in weak haze images than conventional methods.
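Inverting the atmospheric scattering model per color channel, the step a wavelength-dependent transmission map enables, can be sketched as follows; the paper's transmission estimation and blending stages are not reproduced, and the function and parameter names are assumptions:

```python
import numpy as np

def recover_scene(hazy, airlight, transmission, t_min=0.1):
    """Invert the scattering model per channel:
    I_c = J_c * t_c + A_c * (1 - t_c)  =>  J_c = (I_c - A_c) / t_c + A_c.
    `transmission` holds one map per channel (wavelength-dependent)."""
    t = np.maximum(transmission, t_min)  # clamp to avoid division blow-up
    return (hazy - airlight) / t + airlight
```

With per-channel transmission, each color channel is de-attenuated by its own factor, which is what restores colors more naturally than a single shared map.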
Kuan-Cheng YEH Chia-Hsing YANG Ming-Chun LEE Ta-Sung LEE Hsiang-Hsuan HUNG
To enhance safety and efficiency in the traffic environment, developing intelligent transportation systems (ITSs) is of paramount importance. In ITSs, roadside units (RSUs) are critical components that enable environment awareness and connectivity via radar sensing and communications. In this paper, we focus on RSUs with multiple radar systems. Specifically, we propose a parameter selection method for multiple radar systems to enhance the overall sensing performance. Furthermore, since different radars provide different sensing and tracking results, we propose fusion algorithms that integrate the tracking results of the different radars. We use two commercial frequency-modulated continuous wave (FMCW) radars to conduct experiments in Hsinchu City, Taiwan. The experimental results validate that our proposed approaches improve the overall sensing performance.
Xiangyang CHEN Haiyue LI Chuan LI Weiwei JIANG Hao ZHOU
Since dark channel prior (DCP)-based dehazing is ineffective in sky areas and tends to over-darken images and distort their colors, we propose a novel dehazing method based on sky area segmentation and image fusion. We first segment the image according to the characteristics of its sky and non-sky areas, then estimate the atmospheric light and transmission map according to the DCP and correct them, and finally fuse in the original image after contrast-adaptive histogram equalization to improve the image's detail information. Experiments illustrate that our method performs well in dehazing and reduces image distortion.
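The dark channel computation at the core of the DCP can be sketched as follows; the 15-pixel default patch follows common practice, and the paper's sky segmentation and correction steps are not shown:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over colour channels, followed by
    a minimum filter over a `patch` x `patch` window."""
    min_rgb = img.min(axis=2)          # channel-wise minimum
    h, w = min_rgb.shape
    r = patch // 2
    padded = np.pad(min_rgb, r, mode='edge')
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

The prior holds that haze-free patches have a near-zero dark channel, while bright sky stays high, which is exactly why the sky area needs the separate treatment described above.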
Time-series prediction is widely applied and is an important problem across many fields, such as stock prediction, sales prediction, and loan prediction, where it is of great value in production and daily life. It requires a model that can effectively capture the long-term feature dependence between output and input. Recent studies show that the Transformer can improve time-series prediction. However, the Transformer has problems that prevent its direct application to time-series prediction: (1) local agnosticism: self-attention in the Transformer is insensitive to short-term feature dependence, which leads to model anomalies on time series; (2) memory bottleneck: the spatial complexity of vanilla self-attention grows quadratically with sequence length, making direct modeling of long time series infeasible. To solve these problems, this paper designs an efficient model for long time-series prediction: a double-pyramid bidirectional feature fusion network with a Temporal Convolution Network (TCN) and FastFormer in parallel. This structure combines the fine-grained time-series information captured by the TCN with the global interaction information captured by FastFormer, and thus handles the time-series prediction problem well.
We propose a non-photorealistic rendering method to automatically generate reaction-diffusion-pattern-like images from photographic images. The proposed method uses a smoothing filter with a circular window and changes the size of the circular window depending on the position in the photographic image. By locally changing the size of the circular window, the size of the reaction-diffusion patterns can be changed locally. To verify the effectiveness of the proposed method, experiments were conducted applying it to various photographic images.
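A position-dependent circular-window mean filter, the core operation described above, can be sketched as follows; this brute-force version is illustrative only and omits the method's pattern-generation iterations:

```python
import numpy as np

def circular_mean(img, radius_map):
    """Mean filter with a circular window whose radius varies per pixel;
    a larger radius yields a coarser reaction-diffusion-like pattern."""
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    ys, xs = np.indices((h, w))
    for i in range(h):
        for j in range(w):
            r = radius_map[i, j]
            # Boolean mask of the circular window centred at (i, j).
            mask = (ys - i) ** 2 + (xs - j) ** 2 <= r ** 2
            out[i, j] = img[mask].mean()
    return out
```

Varying `radius_map` across the image is what lets the pattern scale change from place to place.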
Masaki AIDA Takumi SAKIYAMA Ayako HASHIZUME Chisa TAKANO
The problem caused by fake news continues to worsen in today's online social networks. Intuitively, issuing corrections seems an effective countermeasure. However, corrections can, ironically, strengthen attention to fake news, which worsens the situation. This paper proposes a model describing the interaction between fake news and corrections as a reaction-diffusion system; this yields the mechanism by which corrections increase attention to fake news. In this model, the emergence of groups of users who believe fake news is understood as a Turing pattern that appears in the activator-inhibitor model. Numerical calculations show that even if the network structure has no spatial bias, the interaction between fake news and corrections creates groups that are strongly interested in discussing fake news. We also propose and evaluate a basic strategy to counter fake news.
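The Turing-pattern mechanism invoked above rests on the standard linear instability condition for an activator-inhibitor pair; the check below applies the classical criterion to a Jacobian [[a11, a12], [a21, a22]] with diffusion coefficients du, dv, and is illustrative rather than the authors' network-based formulation:

```python
import math

def turing_unstable(a11, a12, a21, a22, du, dv):
    """Classical Turing-instability test: the uniform state is stable
    without diffusion, yet some spatial mode grows once unequal
    diffusion is switched on."""
    det = a11 * a22 - a12 * a21
    # Stability of the uniform state without diffusion: tr < 0, det > 0.
    if not (a11 + a22 < 0 and det > 0):
        return False
    # Diffusion-driven instability of some wavenumber.
    return dv * a11 + du * a22 > 2.0 * math.sqrt(du * dv * det)
```

The inhibitor must diffuse much faster than the activator (dv much larger than du), which in the fake-news model corresponds to corrections spreading more widely than belief.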