The search functionality is under construction.

Keyword Search Result

[Keyword] fusion (253 hits)

Showing hits 1-20 of 253

  • MISpeller: Multimodal Information Enhancement for Chinese Spelling Correction Open Access

    Jiakai LI  Jianyong DUAN  Hao WANG  Li HE  Qing ZHANG  

     
    PAPER-Natural Language Processing

    Publicized: 2024/06/07, Vol: E107-D No:10, Page(s): 1342-1352

    Chinese spelling correction is a foundational task in natural language processing that aims to detect and correct spelling errors in text. Most Chinese spelling correction methods use multimodal information to model the relationship between incorrect and correct characters. However, because the features come from different sources, mismatches occur during fusion, the relative importance of the different modalities is ignored, and the model is prevented from learning efficiently. To this end, this paper proposes a multimodal language model-based Chinese spelling corrector named MISpeller. Built on ChineseBERT as the base model, the method captures and fuses character semantic, phonetic, and graphic information in a single model without constructing additional neural networks, and explicitly handles the unequal fusion of multi-feature information. In addition, to address overcorrection, a replication mechanism is introduced, with the replication factor serving as a dynamic weight for fusing the multimodal information efficiently. The model can control the proportion of original and predicted characters according to the input text and learn more specifically where errors occur. Experiments on the SIGHAN benchmark show that the proposed model improves the correction-level F1 score by an average of 4.36%, achieving state-of-the-art performance and validating its effectiveness.
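
    As a loose illustration of how a replication factor can act as a dynamic weight between keeping the original character and emitting a correction, the sketch below blends a copy distribution with a generation distribution; the tensor shapes, the sigmoid gate, and the function name are illustrative assumptions, not the paper's exact formulation.

        import torch

        def fuse_with_copy_factor(gen_logits, copy_logits, gate_score):
            """Blend generation and copy distributions using a dynamic replication factor.

            gen_logits:  (batch, seq, vocab) scores for predicted (corrected) characters
            copy_logits: (batch, seq, vocab) scores favoring the original input characters
            gate_score:  (batch, seq, 1) raw score from which the replication factor is derived
            """
            p_copy = torch.sigmoid(gate_score)              # replication factor in [0, 1]
            gen_dist = torch.softmax(gen_logits, dim=-1)    # distribution over corrections
            copy_dist = torch.softmax(copy_logits, dim=-1)  # distribution over original characters
            # More copying where the input is likely already correct, more generation elsewhere.
            return p_copy * copy_dist + (1.0 - p_copy) * gen_dist

        gen = torch.randn(2, 8, 100)
        copy = torch.randn(2, 8, 100)
        gate = torch.randn(2, 8, 1)
        print(fuse_with_copy_factor(gen, copy, gate).shape)  # torch.Size([2, 8, 100])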

  • REM-CiM: Attentional RGB-Event Fusion Multi-Modal Analog CiM for Area/Energy-Efficient Edge Object Detection during Both Day and Night Open Access

    Yuya ICHIKAWA  Ayumu YAMADA  Naoko MISAWA  Chihiro MATSUI  Ken TAKEUCHI  

     
    PAPER

    Publicized: 2024/04/09, Vol: E107-C No:10, Page(s): 426-435

    Integrating RGB and event sensors improves object detection accuracy, especially at night, due to the high dynamic range of event cameras. However, introducing an event sensor increases the required computational resources, which makes it difficult to implement RGB-event fusion multi-modal AI on computation-in-memory (CiM) hardware. To tackle this issue, this paper proposes an RGB-Event fusion Multi-modal analog CiM, called REM-CiM, for multi-modal edge object detection AI. In REM-CiM, the multi-modal AI algorithm and the circuit implementation are co-designed through two proposals. First, the Memory capacity-Efficient Attentional Feature Pyramid Network (MEA-FPN), a model architecture for RGB-event fusion analog CiM, is proposed for parameter-efficient RGB-event fusion. Convolution-less bi-directional calibration (C-BDC) in MEA-FPN extracts important features of each modality with attention modules while reducing the number of weight parameters by removing large convolutional operations from conventional BDC. The proposed MEA-FPN with C-BDC achieves a 76% reduction in parameters while keeping mean Average Precision (mAP) degradation below 2.3% during both day and night, compared with Attentional FPN fusion (A-FPN), a conventional BDC-based FPN fusion. Second, low-bit quantization with clipping (LQC) is proposed to reduce area and energy. The proposed REM-CiM with MEA-FPN and LQC requires almost the same number of memory cells while achieving 21% less ADC area, 24% less ADC energy, and 0.17% higher mAP than a conventional FPN-fusion CiM without LQC.
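
    A minimal sketch of low-bit quantization with clipping in its generic form (symmetric uniform quantization to a few bits after clipping to a threshold); the threshold choice and rounding scheme here are assumptions, not the specific LQC design of REM-CiM.

        import numpy as np

        def quantize_with_clipping(weights, n_bits=4, clip=None):
            """Symmetric uniform quantization to n_bits with an explicit clipping threshold."""
            if clip is None:
                clip = np.abs(weights).max()              # assumption: clip at the max magnitude
            q_max = 2 ** (n_bits - 1) - 1                 # e.g. 7 signed levels for 4 bits
            scale = clip / q_max
            clipped = np.clip(weights, -clip, clip)       # clipping shrinks the range the cells must cover
            codes = np.round(clipped / scale)             # integer levels stored in the memory cells
            return codes * scale                          # dequantized values used for evaluation

        w = np.random.randn(8)
        print(quantize_with_clipping(w, n_bits=4, clip=2.0))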

  • Reinforced Voxel-RCNN: An Efficient 3D Object Detection Method Based on Feature Aggregation Open Access

    Jia-ji JIANG  Hai-bin WAN  Hong-min SUN  Tuan-fa QIN  Zheng-qiang WANG  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2024/04/24, Vol: E107-D No:9, Page(s): 1228-1238

    In this paper, Voxel-RCNN (Towards High Performance Voxel-based 3D Object Detection), a three-dimensional (3D) point cloud object detection model, is used as the baseline network. To address the shortcomings of current mainstream voxel-based 3D point cloud methods, namely the limitations of the 3D backbone and the lack of feature expression ability under the bird’s-eye view (BEV), a high-performance voxel-based 3D object detection network, Reinforced Voxel-RCNN, is proposed. First, a 3D feature extraction module that integrates an inverted residual convolutional network with weight normalization is designed for the 3D backbone. This module retains more point cloud feature information, enhances the information interaction between convolutional layers, and improves the feature extraction ability of the backbone network. Second, a spatial feature-semantic fusion module based on spatial and channel attention is proposed from the BEV perspective; the combined use of channel and semantic features further improves the network’s ability to express point cloud features. On the public KITTI dataset, the proposed method outperforms many voxel-based methods. Compared with the baseline network, the 3D and BEV average accuracy improve for the Car, Cyclist, and Pedestrian categories: 3D average accuracy improves by 0.23% for Car, 0.78% for Cyclist, and 2.08% for Pedestrian, while BEV average accuracy improves by 0.32%, 0.99%, and 2.38%, respectively. These findings demonstrate that the proposed improvements effectively enhance detection accuracy.
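
    As a rough sketch of the kind of building block described here, the following combines an inverted residual structure (expand, depthwise convolution, project) with weight-normalized 3D convolutions; it assumes dense voxel features and standard PyTorch layers, whereas the actual backbone operates on sparse voxels, so treat it as illustrative only.

        import torch
        import torch.nn as nn
        from torch.nn.utils import weight_norm

        class InvertedResidual3D(nn.Module):
            """Inverted residual block with weight-normalized 3D convolutions (illustrative)."""
            def __init__(self, channels, expansion=4):
                super().__init__()
                hidden = channels * expansion
                self.block = nn.Sequential(
                    weight_norm(nn.Conv3d(channels, hidden, kernel_size=1)),   # expand
                    nn.ReLU(inplace=True),
                    weight_norm(nn.Conv3d(hidden, hidden, kernel_size=3,
                                          padding=1, groups=hidden)),          # depthwise
                    nn.ReLU(inplace=True),
                    weight_norm(nn.Conv3d(hidden, channels, kernel_size=1)),   # project back
                )

            def forward(self, x):
                return x + self.block(x)  # residual connection keeps point cloud features

        x = torch.randn(1, 16, 8, 32, 32)
        print(InvertedResidual3D(16)(x).shape)  # torch.Size([1, 16, 8, 32, 32])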

  • Remote Sensing Image Dehazing Using Multi-Scale Gated Attention for Flight Simulator Open Access

    Qi LIU  Bo WANG  Shihan TAN  Shurong ZOU  Wenyi GE  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2024/05/14, Vol: E107-D No:9, Page(s): 1206-1218

    For flight simulators, it is crucial to create three-dimensional terrain from clear remote sensing images. However, due to haze and other factors, the acquired remote sensing images typically have low contrast and blurry features. To support a flight simulator visual system, we propose a deep learning-based dehazing model for remote sensing images. The proposed encoder-decoder architecture consists of a multi-scale fusion module and a gated large-kernel convolutional attention module; it fuses multi-resolution global and local semantic features and adaptively extracts image features over complex terrain. Experimental results demonstrate that the model outperforms existing methods, generalizes well, and achieves high-confidence dehazing on remote sensing images with a variety of haze concentrations, complex terrains, and spatial resolutions.

  • Conflict Management Method Based on a New Belief Divergence in Evidence Theory Open Access

    Zhu YIN  Xiaojian MA  Hang WANG  

     
    PAPER-Office Information Systems, e-Business Modeling

    Publicized: 2024/03/01, Vol: E107-D No:7, Page(s): 857-868

    Highly conflicting evidence that may lead to counter-intuitive results is one of the challenges for information fusion in Dempster-Shafer evidence theory. To deal with this issue, evidence conflict is investigated through a belief divergence that measures the discrepancy between pieces of evidence. In this paper, the pignistic probability transform belief χ2 divergence, named the BBχ2 divergence, is proposed. By introducing the pignistic probability transform, the proposed BBχ2 divergence can accurately quantify the difference between pieces of evidence while taking multi-element sets into account. Compared with several existing belief divergences, the novel divergence is more precise. Based on this divergence, a new multi-source information fusion method is devised. The proposed method considers both credibility weights and information volume weights to determine the overall weight of each piece of evidence. Finally, the proposed method is applied to target recognition and fault diagnosis, where comparative analysis indicates that it achieves the highest accuracy in managing evidence conflict.
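
    A rough sketch of the two ingredients the abstract names, the pignistic probability transform and a χ2-type divergence between the resulting distributions; the symmetric average at the end is an assumption standing in for the paper's exact BBχ2 formulation.

        def pignistic_transform(mass, frame):
            """Pignistic transform: spread each focal set's mass evenly over its elements."""
            betp = {x: 0.0 for x in frame}
            for focal, m in mass.items():
                for x in focal:
                    betp[x] += m / len(focal)
            return betp

        def chi2_divergence(p, q, eps=1e-12):
            """Chi-square divergence between two discrete distributions on the same frame."""
            return sum((p[x] - q[x]) ** 2 / (q[x] + eps) for x in p)

        frame = ("a", "b", "c")
        m1 = {("a",): 0.6, ("a", "b"): 0.3, ("a", "b", "c"): 0.1}
        m2 = {("b",): 0.5, ("a", "b"): 0.4, ("a", "b", "c"): 0.1}
        p1, p2 = pignistic_transform(m1, frame), pignistic_transform(m2, frame)
        # Symmetric average of the two directed divergences (an assumption, not the paper's exact form).
        print(0.5 * (chi2_divergence(p1, p2) + chi2_divergence(p2, p1)))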

  • A Retinal Vessel Segmentation Network Fusing Cross-Modal Features Open Access

    Xiaosheng YU  Jianning CHI  Ming XU  

     
    LETTER-Image

    Publicized: 2023/11/01, Vol: E107-A No:7, Page(s): 1071-1075

    Accurate segmentation of the fundus vessel structure can effectively assist doctors in diagnosing eye diseases. In this paper, we propose a fundus blood vessel segmentation network that fuses cross-modal features and verify the method on the public OCTA-500 dataset. Experimental results show that our method achieves high accuracy and robustness.

  • Prohibited Item Detection Within X-Ray Security Inspection Images Based on an Improved Cascade Network Open Access

    Qingqi ZHANG  Xiaoan BAO  Ren WU  Mitsuru NAKATA  Qi-Wei GE  

     
    PAPER

    Publicized: 2024/01/16, Vol: E107-A No:5, Page(s): 813-824

    Automatic detection of prohibited items is vital for helping security staff work more efficiently while improving public safety. However, prohibited item detection in X-ray security inspection images is hindered by several factors, including the imbalanced distribution of categories, the diversity of prohibited item scales, and overlap between items. In this paper, we propose to leverage the Poisson blending algorithm with the Canny edge operator to alleviate the imbalanced category distribution in X-ray image datasets as much as possible. Building on this, we improve the cascade network to deal with the other two difficulties. To address scale diversity, we propose the Re-BiFPN feature fusion method, which includes a coordinate attention atrous spatial pyramid pooling (CA-ASPP) module and a recursive connection. The CA-ASPP module implicitly extracts direction-aware and position-aware information from the feature map, and the recursive connection feeds the multi-scale feature map processed by the CA-ASPP module back to the bottom-up backbone layers for further multi-scale feature extraction. In addition, a Rep-CIoU loss function is designed to address the overlap problem in X-ray images. Extensive experimental results demonstrate that our method can successfully identify ten types of prohibited items, such as Knives, Scissors, and Pressure, and achieves an mAP of 83.4%, which is 3.8% higher than the original cascade network. Moreover, our method outperforms other mainstream methods by a significant margin.

  • Infrared and Visible Image Fusion via Hybrid Variational Model Open Access

    Zhengwei XIA  Yun LIU  Xiaoyun WANG  Feiyun ZHANG  Rui CHEN  Weiwei JIANG  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2023/12/11, Vol: E107-D No:4, Page(s): 569-573

    Infrared and visible image fusion combines thermal radiation information and textures to produce a high-quality fused image. In this letter, we propose a hybrid variational fusion model to achieve this end. Specifically, an ℓ0 term is adopted to preserve the highlighted targets with salient gradient variation in the infrared image, an ℓ1 term is used to suppress noise in the fused image, and an ℓ2 term is employed to keep the textures of the visible image. Experimental results demonstrate the superiority of the proposed variational model; our results exhibit sharper textures with less noise.
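
    For concreteness, one plausible form of the hybrid energy described above is sketched below in LaTeX; the pairing of each norm with a particular data term and the weights λ1, λ2 are assumptions, not the authors' exact model.

        \min_{F}\; \|\nabla F-\nabla I_{\mathrm{ir}}\|_{0}
        \;+\;\lambda_{1}\,\|\nabla F\|_{1}
        \;+\;\lambda_{2}\,\|\nabla F-\nabla I_{\mathrm{vis}}\|_{2}^{2}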

  • An Intra- and Inter-Emotion Transformer-Based Fusion Model with Homogeneous and Diverse Constraints Using Multi-Emotional Audiovisual Features for Depression Detection

    Shiyu TENG  Jiaqing LIU  Yue HUANG  Shurong CHAI  Tomoko TATEYAMA  Xinyin HUANG  Lanfen LIN  Yen-Wei CHEN  

     
    PAPER

    Publicized: 2023/12/15, Vol: E107-D No:3, Page(s): 342-353

    Depression is a prevalent mental disorder affecting a significant portion of the global population, leading to considerable disability and contributing to the overall burden of disease. Consequently, designing efficient and robust automated methods for depression detection has become imperative. Recently, deep learning methods, especially multimodal fusion methods, have been increasingly used in computer-aided depression detection. Importantly, individuals with depression and those without respond differently to various emotional stimuli, which provides valuable information for detecting depression. Building on these observations, we propose an intra- and inter-emotional stimulus transformer-based fusion model to effectively extract depression-related features. The intra-emotional stimulus fusion framework prioritizes different modalities, capitalizing on their diversity and complementarity for depression detection. The inter-emotional stimulus model maps each emotional stimulus onto both invariant and specific subspaces using individual invariant and specific encoders. The emotional stimulus-invariant subspace facilitates efficient information sharing and integration across different emotional stimulus categories, while the emotional stimulus-specific subspace enhances diversity and captures the distinct characteristics of individual emotional stimulus categories. The proposed intra- and inter-emotional stimulus fusion model effectively integrates multimodal data under various emotional stimulus categories, providing a comprehensive representation that allows accurate task predictions for depression detection. We evaluate the proposed model on the Chinese Soochow University students dataset, and the results show that it outperforms state-of-the-art models in terms of concordance correlation coefficient (CCC), root mean squared error (RMSE), and accuracy.

  • Content Search Method Utilizing the Metadata Matching Characteristics of Both Spatio-Temporal Content and User Request in the IoT Era

    Shota AKIYOSHI  Yuzo TAENAKA  Kazuya TSUKAMOTO  Myung LEE  

     
    PAPER-Network System

    Publicized: 2023/10/06, Vol: E107-B No:1, Page(s): 163-172

    Cross-domain data fusion is becoming a key driver in the growth of numerous and diverse applications in the Internet of Things (IoT) era. We have proposed the concept of a new information platform, the Geo-Centric Information Platform (GCIP), that enables IoT data fusion based on geolocation, i.e., it produces spatio-temporal content (STC) and then provides the STC to users. In this environment, users cannot know in advance “when,” “where,” or “what type” of STC is being generated, because the type and timing of STC generation vary dynamically with the diversity of IoT data generated in each geographical area. This makes it difficult to directly search for a specific STC requested by the user using a content identifier (the domain name of a URI or a content name). To solve this problem, a new content discovery method is needed that does not directly specify content identifiers but takes into account (1) spatial and (2) temporal constraints. Our previous study proposed a content discovery method that considers only spatial constraints. This paper proposes a new content discovery method that matches user requests with content metadata (topic) characteristics while taking both spatial and temporal constraints into account. Simulation results show that the proposed method successfully discovers appropriate STC in response to a user request.

  • A Note on the Confusion Coefficient of Boolean Functions

    Yu ZHOU  Jianyong HU  Xudong MIAO  Xiaoni DU  

     
    PAPER-Cryptography and Information Security

    Publicized: 2023/05/24, Vol: E106-A No:12, Page(s): 1525-1530

    Low confusion coefficient values can make side-channel attacks harder for vector Boolean functions in block ciphers. In this paper, we give new results on the confusion coefficient of f ⊞ g, f ⊡ g, f ⊕ g and fg for different Boolean functions f and g. We also derive a relationship between the sum-of-squares of the confusion coefficient of one n-variable function and those of its two (n - 1)-variable decomposition functions. Finally, we find that the confusion coefficient of vector Boolean functions is affine invariant.
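
    The sketch below computes one common form of the confusion coefficient for a single-output Boolean function, the probability over all inputs x that f(x ⊕ k1) ≠ f(x ⊕ k2); the toy function and keys are illustrative, and the paper's operators ⊞ and ⊡ (modular addition and multiplication of functions) are not reproduced here.

        def confusion_coefficient(f, k1, k2, n):
            """Pr over all n-bit inputs x that f(x ^ k1) != f(x ^ k2) (one common definition)."""
            differ = sum(f(x ^ k1) != f(x ^ k2) for x in range(2 ** n))
            return differ / 2 ** n

        # Toy 3-variable Boolean function (illustrative only): parity of the two low bits.
        f = lambda x: bin(x & 0b011).count("1") % 2
        print(confusion_coefficient(f, k1=0b001, k2=0b110, n=3))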

  • Deep Unrolling of Non-Linear Diffusion with Extended Morphological Laplacian

    Gouki OKADA  Makoto NAKASHIZUKA  

     
    PAPER-Image

    Publicized: 2023/07/21, Vol: E106-A No:11, Page(s): 1395-1405

    This paper presents a deep network obtained by unrolling the diffusion process with a morphological Laplacian. The diffusion process is an iterative algorithm that solves the diffusion equation and represents time evolution through the Laplacian; it is applied to image smoothing and has been extended with non-linear operators for various image processing tasks. In this study, we introduce the morphological Laplacian into the basic diffusion process and unroll it into a deep network. Morphological filters are non-linear operators whose parameters are referred to as structuring elements, and the discrete Laplacian can be approximated with morphological filters without multiplications. Owing to the non-linearity of morphological filters with trainable structuring elements, training uses error backpropagation, and the morphological network can be adapted to specific image processing applications. We introduce two extensions of the morphological Laplacian for deep networks. Since morphological filters are realized with addition, max, and min, errors caused by limited bit length are not amplified; consequently, the morphological parts of the network are implemented with unsigned 8-bit integers and single instruction multiple data (SIMD) instructions to achieve fast computation on small devices. We applied the proposed network to image completion and Gaussian denoising and compared the results and computational time with other denoising algorithms and deep networks.
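
    The morphological Laplacian referred to here is commonly approximated as (dilation + erosion - 2·image), which needs only max, min, and addition. The sketch below uses a flat square structuring element as an illustrative choice, whereas the paper trains its structuring elements; the explicit diffusion step size at the end is also an assumption.

        import numpy as np
        from scipy.ndimage import grey_dilation, grey_erosion

        def morphological_laplacian(img, size=3):
            """Approximate the Laplacian with max/min filters: dilation + erosion - 2*img."""
            img = img.astype(np.int16)                      # headroom for the subtraction
            dil = grey_dilation(img, size=(size, size))     # local maximum
            ero = grey_erosion(img, size=(size, size))      # local minimum
            return dil + ero - 2 * img

        img = (np.random.rand(64, 64) * 255).astype(np.uint8)
        lap = morphological_laplacian(img)
        # One explicit diffusion step (illustrative step size, not the unrolled network itself).
        smoothed = img.astype(np.float32) + 0.2 * lap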

  • Inverse Heat Dissipation Model for Medical Image Segmentation

    Yu KASHIHARA  Takashi MATSUBARA  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2023/08/22, Vol: E106-D No:11, Page(s): 1930-1934

    The diffusion model has achieved success in generating and editing high-quality images because of its ability to produce fine details. Its superior generation ability has the potential to facilitate more detailed segmentation. This study presents a novel approach to segmentation tasks using an inverse heat dissipation model, a kind of diffusion-based model. The proposed method generates a mask that gradually shrinks to fit the shape of the desired segmentation region. We comprehensively evaluated the proposed method on multiple datasets under varying conditions. The results show that the proposed method outperforms existing methods and provides more detailed segmentation.

  • Fusion-Based Edge and Color Recovery Using Weighted Near-Infrared Image and Color Transmission Maps for Robust Haze Removal

    Onhi KATO  Akira KUBOTA  

     
    PAPER

    Publicized: 2023/05/23, Vol: E106-D No:10, Page(s): 1661-1672

    Various haze removal methods based on the atmospheric scattering model have been presented in recent years. Most methods target strong haze, where light is scattered equally in all color channels. This paper presents a haze removal method that uses near-infrared (NIR) images for relatively weak haze. To recover lost edges, the presented method first extracts edges from an appropriately weighted NIR image and fuses them with the color image. By introducing a wavelength-dependent scattering model, our method then estimates the transmission map for each color channel and recovers color more naturally from the edge-recovered image. Finally, the edge-recovered and color-recovered images are blended. In this blending process, regions with high lightness, such as sky and clouds, where unnatural color shifts are likely to occur, are effectively estimated, and an optimal weighting map is obtained. Qualitative and quantitative evaluations using 59 pairs of color and NIR images demonstrate that our method recovers edges and colors more naturally in weak haze images than conventional methods.

  • Parameter Selection and Radar Fusion for Tracking in Roadside Units

    Kuan-Cheng YEH  Chia-Hsing YANG  Ming-Chun LEE  Ta-Sung LEE  Hsiang-Hsuan HUNG  

     
    PAPER-Sensing

    Publicized: 2023/03/03, Vol: E106-B No:9, Page(s): 855-863

    To enhance safety and efficiency in the traffic environment, developing intelligent transportation systems (ITSs) is of paramount importance. In ITSs, roadside units (RSUs) are critical components that enable environmental awareness and connectivity via radar sensing and communications. In this paper, we focus on RSUs with multiple radar systems. Specifically, we propose a parameter selection method for multiple radar systems to enhance the overall sensing performance. Furthermore, since different radars provide different sensing and tracking results, we propose fusion algorithms that integrate the tracking results of the different radars. We use two commercial frequency-modulated continuous wave (FMCW) radars to conduct experiments in Hsinchu City, Taiwan. The experimental results validate that the proposed approaches improve the overall sensing performance.

  • Single Image Dehazing Based on Sky Area Segmentation and Image Fusion

    Xiangyang CHEN  Haiyue LI  Chuan LI  Weiwei JIANG  Hao ZHOU  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2023/04/24, Vol: E106-D No:7, Page(s): 1249-1253

    Since dark channel prior (DCP)-based dehazing is ineffective in the sky area and causes over-darkening and color distortion, we propose a novel dehazing method based on sky area segmentation and image fusion. We first segment the image into sky and non-sky areas according to their characteristics, then estimate the atmospheric light and transmission map according to the DCP and correct them, and finally fuse the result with the original image processed by contrast adaptive histogram equalization to enhance image details. Experiments show that our method performs well in dehazing and reduces image distortion.
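
    For reference, the standard dark-channel-prior transmission estimate that this method starts from can be sketched as t = 1 - ω·dark(I/A); the sky segmentation, correction steps, and fusion that constitute the paper's contribution are not shown, and the toy airlight estimate below is an assumption.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dcp_transmission(img, atmosphere, patch=15, omega=0.95):
            """Standard dark-channel-prior transmission estimate: t = 1 - omega * dark(I / A)."""
            normalized = img / atmosphere                               # divide each channel by its airlight
            dark = minimum_filter(normalized.min(axis=2), size=patch)   # per-patch minimum of the channel minimum
            return 1.0 - omega * dark

        # Toy hazy image and a rough airlight estimate (the brightest haze-opaque pixels would be used in practice).
        hazy = np.clip(np.random.rand(64, 64, 3) * 0.6 + 0.4, 0, 1)
        A = hazy.reshape(-1, 3).max(axis=0)
        t = dcp_transmission(hazy, A)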

  • Time-Series Prediction Based on Double Pyramid Bidirectional Feature Fusion Mechanism

    Na WANG  Xianglian ZHAO  

     
    PAPER-Digital Signal Processing

    Publicized: 2022/12/20, Vol: E106-A No:6, Page(s): 886-895

    Time-series prediction is widely applied and is an important problem across many fields, such as stock prediction, sales prediction, and loan prediction, with great value in production and daily life. It requires a model that can effectively capture long-term dependencies between output and input. Recent studies show that the Transformer can improve time-series prediction, but several problems prevent it from being applied directly: (1) local agnosticism: self-attention in the Transformer is not sensitive to short-term dependencies, which leads to anomalies in time-series modeling; (2) memory bottleneck: the space complexity of the standard Transformer grows quadratically with the sequence length, making direct modeling of long time-series infeasible. To solve these problems, this paper designs an efficient model for long time-series prediction: a double pyramid bidirectional feature fusion network with parallel Temporal Convolution Network (TCN) and FastFormer branches. This structure combines the fine-grained temporal information captured by the TCN with the global interaction information captured by FastFormer, and handles time-series prediction well.
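
    A skeletal sketch of the parallel-branch idea: a 1-D convolution stands in for the Temporal Convolution Network's local modelling and standard multi-head attention stands in for FastFormer's global modelling, with the two outputs fused by concatenation; the double-pyramid, bidirectional, and FastFormer-specific details are omitted and all layer sizes are illustrative.

        import torch
        import torch.nn as nn

        class ParallelFusionBlock(nn.Module):
            """Parallel local (convolutional) and global (attention) branches, fused by concatenation."""
            def __init__(self, dim=64, kernel=3, heads=4):
                super().__init__()
                self.local = nn.Conv1d(dim, dim, kernel_size=kernel, padding=kernel - 1)  # causal-style conv
                self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)           # global interactions
                self.fuse = nn.Linear(2 * dim, dim)

            def forward(self, x):                      # x: (batch, seq_len, dim)
                local = self.local(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)   # trim causal padding
                global_, _ = self.attn(x, x, x)
                return self.fuse(torch.cat([local, global_], dim=-1))

        x = torch.randn(2, 96, 64)
        print(ParallelFusionBlock()(x).shape)  # torch.Size([2, 96, 64])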

  • Generation of Reaction-Diffusion-Pattern-Like Images with Partially Variable Size

    Toru HIRAOKA  

     
    LETTER-Image

    Publicized: 2022/12/08, Vol: E106-A No:6, Page(s): 957-961

    We propose a non-photorealistic rendering method that automatically generates reaction-diffusion-pattern-like images from photographic images. The proposed method uses a smoothing filter with a circular window and changes the size of the window depending on the position in the photographic image. By varying the window size locally, the size of the reaction-diffusion patterns can be changed locally as well. To verify the effectiveness of the proposed method, experiments were conducted applying it to various photographic images.

  • Cluster Structure of Online Users Generated from Interaction Between Fake News and Corrections Open Access

    Masaki AIDA  Takumi SAKIYAMA  Ayako HASHIZUME  Chisa TAKANO  

     
    PAPER-Fundamental Theories for Communications

    Publicized: 2022/11/21, Vol: E106-B No:5, Page(s): 392-401

    The problem caused by fake news continues to worsen in today's online social networks. Intuitively, it seems effective to issue corrections as a countermeasure. However, corrections can, ironically, strengthen attention to fake news, which worsens the situation. This paper proposes a model for describing the interaction between fake news and the corrections as a reaction-diffusion system; this yields the mechanism by which corrections increase attention to fake news. In this model, the emergence of groups of users who believe in fake news is understood as a Turing pattern that appears in the activator-inhibitor model. Numerical calculations show that even if the network structure has no spatial bias, the interaction between fake news and the corrections creates groups that are strongly interested in discussing fake news. Also, we propose and evaluate a basic strategy to counter fake news.
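
    As a toy illustration of the activator-inhibitor picture, the sketch below runs FitzHugh-Nagumo-like dynamics with different diffusion rates on a ring of nodes, where the activator loosely corresponds to attention to fake news and the inhibitor to the effect of corrections; the reaction terms and coefficients are illustrative choices, not the paper's model.

        import numpy as np

        def simulate_turing(n=100, steps=5000, dt=0.01, du=0.05, dv=0.5):
            """Activator-inhibitor dynamics on a ring; distinct diffusion rates allow patterns to form."""
            rng = np.random.default_rng(0)
            u = 1.0 + 0.01 * rng.standard_normal(n)   # activator (attention to fake news, loosely)
            v = 1.0 + 0.01 * rng.standard_normal(n)   # inhibitor (effect of corrections, loosely)
            lap = lambda w: np.roll(w, 1) + np.roll(w, -1) - 2 * w   # discrete Laplacian on a ring
            for _ in range(steps):
                u += dt * (du * lap(u) + u - u**3 - v)  # illustrative cubic reaction term
                v += dt * (dv * lap(v) + u - v)
            return u, v

        u, v = simulate_turing()
        print(u.min(), u.max())  # inspect the final activator profile across the ring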

  • MolHF: Molecular Heterogeneous Attributes Fusion for Drug-Target Affinity Prediction on Heterogeneity

    Runze WANG  Zehua ZHANG  Yueqin ZHANG  Zhongyuan JIANG  Shilin SUN  Guixiang MA  

     
    PAPER-Smart Healthcare

    Publicized: 2022/05/31, Vol: E106-D No:5, Page(s): 697-706

    Recent advances in protein structure prediction, such as AlphaFold, have drawn great attention to deep learning for the Drug-Target Affinity (DTA) task. Most works are dedicated to embedding a single molecular property and homogeneous information, ignoring the diverse heterogeneous information contained in molecules and their interactions. Motivated by this, we propose an end-to-end deep learning framework that performs Molecular Heterogeneous feature Fusion (MolHF) for DTA prediction on heterogeneous data. To address the challenge that biochemical attributes reside in different heterogeneous spaces, we design a Molecular Heterogeneous Information Learning module with multi-strategy learning. In particular, a Molecular Heterogeneous Attention Fusion module is presented to obtain the gains of molecular heterogeneous features, allowing diverse structural information of drug molecules to be extracted. Extensive experiments on two benchmark datasets show that our method outperforms the baselines on all four metrics. Ablation studies validate the effect of attentive fusion and multiple groups of drug heterogeneous features, and visualizations demonstrate the impact of the protein embedding level and the model's ability to fit the data. In summary, the diverse gains brought by heterogeneous information contribute to drug-target affinity prediction.

Showing hits 1-20 of 253