
Keyword Search Result

[Keyword] attention (111 hits)

Results 21-40 of 111 hits

  • MolHF: Molecular Heterogeneous Attributes Fusion for Drug-Target Affinity Prediction on Heterogeneity

    Runze WANG  Zehua ZHANG  Yueqin ZHANG  Zhongyuan JIANG  Shilin SUN  Guixiang MA  

     
    PAPER-Smart Healthcare

      Publicized: 2022/05/31  Vol: E106-D No:5  Page(s): 697-706

    Recent advances in protein structure prediction such as AlphaFold have brought deep learning considerable attention on the Drug-Target Affinity (DTA) task. Most works are dedicated to embedding a single molecular property and homogeneous information, ignoring the diverse heterogeneous information contained in molecules and their interactions. Motivated by this, we propose an end-to-end deep learning framework, Molecular Heterogeneous features Fusion (MolHF), for DTA prediction on heterogeneous data. To address the challenge that biochemical attributes lie in different heterogeneous spaces, we design a Molecular Heterogeneous Information Learning module with multi-strategy learning. In particular, a Molecular Heterogeneous Attention Fusion module is presented to obtain the gains of molecular heterogeneous features. With these, diverse molecular structure information for drugs can be extracted. Extensive experiments on two benchmark datasets show that our method outperforms the baselines on all four metrics. Ablation studies validate the effect of attentive fusion and of multiple groups of drug heterogeneous features. Visualizations demonstrate the impact of the protein embedding level and the model's ability to fit the data. In summary, the diverse gains brought by heterogeneous information contribute to drug-target affinity prediction.

  • Learning Pixel Perception for Identity and Illumination Consistency Face Frontalization in the Wild

    Yongtang BAO  Pengfei ZHOU  Yue QI  Zhihui WANG  Qing FAN  

     
    PAPER-Person Image Generation

      Publicized: 2022/06/21  Vol: E106-D No:5  Page(s): 794-803

    Synthesizing a frontal, realistic face image from a single profile face image has a wide range of applications in face recognition. Although deep-learning-based face frontalization has made substantial progress in recent years, there is still no guarantee that the generated face preserves identity consistency and illumination consistency under large poses. This paper proposes a novel pixel-based feature regression generative adversarial network (PFR-GAN), which learns to recover local high-frequency details and to preserve identity and illumination consistency of frontal face images in an uncontrolled environment. We first propose a Reslu block to obtain richer feature representations and improve the convergence speed of training. We then introduce a feature conversion module to reduce the artifacts caused by face rotation discrepancy, enhance image generation quality, and preserve more high-frequency details of the profile image. We also construct a face pose dataset of 30,000 images covering various uncontrolled environments. Our dataset includes faces of different ages and races against wild backgrounds, allowing us to handle other datasets and obtain better results. Finally, we introduce a discriminator for recovering the facial structure of the frontal face images. Quantitative and qualitative experimental results show that PFR-GAN can generate high-quality and high-fidelity frontal face images, and our results are better than the state-of-the-art.

  • Enhanced Full Attention Generative Adversarial Networks

    KaiXu CHEN  Satoshi YAMANE  

     
    LETTER-Core Methods

      Publicized: 2023/01/12  Vol: E106-D No:5  Page(s): 813-817

    In this paper, we propose an improved Generative Adversarial Network with an attention module in the generator, which enhances the generator's effectiveness. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we explore the effect of different normalization schemes (spectral normalization, instance normalization) on the generator and discriminator. Moreover, an enhanced loss function based on the Wasserstein divergence alleviates the training difficulties encountered in practice.
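
    The normalization choices mentioned above are standard building blocks; as a rough, hedged sketch of how spectral normalization and instance normalization are typically attached to GAN layers in PyTorch (an illustration only, not the authors' code), one might write:

      # Illustrative sketch: normalization in generator/discriminator blocks (not the paper's code).
      import torch
      import torch.nn as nn
      from torch.nn.utils import spectral_norm

      class GenBlock(nn.Module):
          """Generator block with instance normalization (illustrative only)."""
          def __init__(self, in_ch, out_ch):
              super().__init__()
              self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
              self.norm = nn.InstanceNorm2d(out_ch)   # instance normalization
              self.act = nn.ReLU(inplace=True)
          def forward(self, x):
              return self.act(self.norm(self.conv(x)))

      class DiscBlock(nn.Module):
          """Discriminator block with spectral normalization (illustrative only)."""
          def __init__(self, in_ch, out_ch):
              super().__init__()
              self.conv = spectral_norm(nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1))
              self.act = nn.LeakyReLU(0.2, inplace=True)
          def forward(self, x):
              return self.act(self.conv(x))

      x = torch.randn(1, 64, 32, 32)
      print(GenBlock(64, 64)(x).shape, DiscBlock(64, 128)(x).shape)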

  • Prediction of Driver's Visual Attention in Critical Moment Using Optical Flow

    Rebeka SULTANA  Gosuke OHASHI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized: 2023/01/26  Vol: E106-D No:5  Page(s): 1018-1026

    In recent years, drivers' visual attention has been actively studied for driving automation technology. However, few models provide an in-depth understanding of driver attention at various moments. Existing attention models process multi-level image representations with two-stream or multi-stream networks, which increases the computational cost because of the additional model parameters. Nevertheless, multi-level image representations such as optical flow play a vital role in video tasks. Therefore, to reduce the computational cost of a two-stream network while still using multi-level image representations, this work proposes a single-stream driver visual attention model for critical situations. The experiments were conducted on a publicly available critical driving dataset named BDD-A. Qualitative results confirm the effectiveness of the proposed model. Moreover, quantitative results show that the proposed model outperforms state-of-the-art visual attention models in terms of CC and SIM. Extensive ablation studies examine the contribution of optical flow, its position in the spatial network, the convolution layers used to process it, and the computational cost compared to a two-stream model.
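
    For readers unfamiliar with optical flow as an input representation, the following is a minimal, hypothetical sketch of computing dense optical flow with OpenCV and stacking it with the current RGB frame into a single multi-channel input; the function name and the 5-channel layout are illustrative assumptions, not the paper's pipeline:

      # Hypothetical sketch: dense optical flow stacked with the RGB frame (not the paper's pipeline).
      import cv2
      import numpy as np

      def flow_augmented_frame(prev_bgr, curr_bgr):
          prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
          curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
          # Farneback dense optical flow: (H, W, 2) displacement field.
          flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          # Concatenate normalized RGB and the two flow channels into one 5-channel array.
          return np.concatenate([curr_bgr.astype(np.float32) / 255.0, flow], axis=2)

      prev = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
      curr = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
      print(flow_augmented_frame(prev, curr).shape)  # (240, 320, 5)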

  • Chinese Named Entity Recognition Method Based on Dictionary Semantic Knowledge Enhancement

    Tianbin WANG  Ruiyang HUANG  Nan HU  Huansha WANG  Guanghan CHU  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized: 2023/02/15  Vol: E106-D No:5  Page(s): 1010-1017

    Chinese Named Entity Recognition (NER) is a fundamental technology in Chinese natural language processing. It is extensively used in information extraction, intelligent question answering, and knowledge graphs. Nevertheless, due to the diversity and complexity of Chinese, most Chinese NER methods fail to sufficiently capture character-level semantics, which limits their performance. In this work, we propose DSKE-Chinese NER: Chinese Named Entity Recognition based on Dictionary Semantic Knowledge Enhancement. We integrate character-level semantic information into the character vector space and obtain semantically enriched vector representations through an attention mechanism. In addition, we verify the appropriate number of semantic layers through comparative experiments. Experiments on public Chinese datasets such as Weibo, Resume, and MSRA show that the model outperforms character-based LSTM baselines.
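
    As a hedged illustration of dictionary-enhanced character representations in general (not the DSKE architecture itself), the sketch below attention-weights the embeddings of dictionary words matched to a character and adds the result back to the character embedding; all names and shapes are assumptions made for the example:

      # Hypothetical sketch: attention-weighted fusion of matched dictionary-word embeddings (not DSKE code).
      import torch
      import torch.nn as nn

      class DictAttentionFusion(nn.Module):
          def __init__(self, dim):
              super().__init__()
              self.score = nn.Linear(dim, dim, bias=False)
          def forward(self, char_emb, word_embs, word_mask):
              # char_emb: (B, D); word_embs: (B, M, D) matched dictionary words; word_mask: (B, M) booleans.
              scores = (self.score(word_embs) * char_emb.unsqueeze(1)).sum(-1)        # (B, M)
              scores = scores.masked_fill(~word_mask, float("-inf"))
              weights = torch.nan_to_num(torch.softmax(scores, dim=-1))               # no matches -> zeros
              dict_context = (weights.unsqueeze(-1) * word_embs).sum(1)               # (B, D)
              return char_emb + dict_context

      fusion = DictAttentionFusion(64)
      out = fusion(torch.randn(2, 64), torch.randn(2, 5, 64), torch.ones(2, 5, dtype=torch.bool))
      print(out.shape)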

  • Selective Learning of Human Pose Estimation Based on Multi-Scale Convergence Network

    Wenkai LIU  Cuizhu QIN  Menglong WU  Wenle BAI  Hongxia DONG  

     
    LETTER-Human-computer Interaction

      Publicized: 2023/02/15  Vol: E106-D No:5  Page(s): 1081-1084

    Pose estimation is a research hotspot in computer vision and the key to computer perception of human activities. The core of human pose estimation is describing the motion of the human body through its major joint points. Large receptive fields and rich spatial information facilitate keypoint localization, so capturing features at a larger scale and reintegrating them into the feature space is a central challenge for pose estimation. To address this problem, we propose a multi-scale convergence network (MSCNet) with a large receptive field and rich spatial information. The structure of MSCNet is based on an hourglass network that captures information at different scales to present a consistent understanding of the whole body. Multi-scale receptive field (MSRF) units provide a large receptive field to obtain rich contextual information, which is then selectively enhanced or suppressed by a Squeeze-and-Excitation (SE) attention mechanism to flexibly perform the pose estimation task. Experimental results show that MSCNet scores 73.1% AP on the COCO dataset, an 8.8% improvement over the mainstream CMUPose method. Compared to the advanced CPN, MSCNet has 68.2% of the computational complexity and only 55.4% of the parameters.
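
    The Squeeze-and-Excitation mechanism referenced above is a standard module; a minimal generic sketch (not the MSCNet implementation) looks like this:

      # Generic Squeeze-and-Excitation (SE) channel attention block (illustrative only).
      import torch
      import torch.nn as nn

      class SEBlock(nn.Module):
          def __init__(self, channels, reduction=16):
              super().__init__()
              self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
              self.fc = nn.Sequential(
                  nn.Linear(channels, channels // reduction),
                  nn.ReLU(inplace=True),
                  nn.Linear(channels // reduction, channels),
                  nn.Sigmoid(),                            # excitation: per-channel gates in (0, 1)
              )
          def forward(self, x):
              b, c, _, _ = x.shape
              w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
              return x * w                                 # selectively enhance or suppress channels

      print(SEBlock(64)(torch.randn(2, 64, 32, 32)).shape)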

  • Speech Emotion Recognition Using Multihead Attention in Both Time and Feature Dimensions

    Yue XIE  Ruiyu LIANG  Zhenlin LIANG  Xiaoyan ZHAO  Wenhao ZENG  

     
    LETTER-Speech and Hearing

      Publicized: 2023/02/21  Vol: E106-D No:5  Page(s): 1098-1101

    To enhance emotion features and improve the performance of speech emotion recognition, an attention mechanism is employed to recognize important information in both the time and feature dimensions. In the time dimension, multi-head attention is modified with the last state of the long short-term memory (LSTM) output to match the temporal accumulation characteristic of the LSTM. In the feature dimension, scaled dot-product attention is replaced with additive attention, which follows the LSTM state-update formulation, to construct multi-head attention. This means that a nonlinear transformation replaces the linear mapping of classical multi-head attention. Experiments on the IEMOCAP dataset demonstrate that the attention mechanism enhances emotional information and improves the performance of speech emotion recognition.
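
    For contrast with scaled dot-product attention, the following is a generic sketch of additive (Bahdanau-style) attention scoring, the kind of nonlinear scoring described above; it is an illustration under assumed shapes, not the authors' exact module:

      # Generic additive attention scoring (illustrative only; not the authors' module).
      import torch
      import torch.nn as nn

      class AdditiveAttention(nn.Module):
          def __init__(self, query_dim, key_dim, hidden_dim):
              super().__init__()
              self.w_q = nn.Linear(query_dim, hidden_dim, bias=False)
              self.w_k = nn.Linear(key_dim, hidden_dim, bias=False)
              self.v = nn.Linear(hidden_dim, 1, bias=False)
          def forward(self, query, keys, values):
              # query: (B, Dq); keys: (B, T, Dk); values: (B, T, Dv)
              scores = self.v(torch.tanh(self.w_q(query).unsqueeze(1) + self.w_k(keys)))  # (B, T, 1)
              weights = torch.softmax(scores, dim=1)
              return (weights * values).sum(dim=1), weights.squeeze(-1)

      attn = AdditiveAttention(128, 64, 64)
      ctx, w = attn(torch.randn(4, 128), torch.randn(4, 10, 64), torch.randn(4, 10, 64))
      print(ctx.shape, w.shape)  # torch.Size([4, 64]) torch.Size([4, 10])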

  • Multimodal Named Entity Recognition with Bottleneck Fusion and Contrastive Learning

    Peng WANG  Xiaohang CHEN  Ziyu SHANG  Wenjun KE  

     
    PAPER-Natural Language Processing

      Publicized: 2023/01/18  Vol: E106-D No:4  Page(s): 545-555

    Multimodal named entity recognition (MNER) is the task of recognizing named entities in a multimodal context. Existing methods focus on co-attention mechanisms to discover the relationships between modalities. However, they still have two deficiencies: first, current methods fail to fuse the multimodal representations in a fine-grained way, which may introduce noise from the visual modality; second, current methods do not bridge the semantic gap between heterogeneous modalities. To solve these issues, we propose a novel MNER method with bottleneck fusion and contrastive learning (BFCL). Specifically, we first incorporate a transformer-based bottleneck fusion mechanism, so that information between different modalities can only be exchanged through a small number of bottleneck tokens, thus reducing noise propagation. We then propose two decoupled image-text contrastive losses to align the unimodal representations, pulling the representations of semantically similar modalities closer while pushing those of semantically different modalities farther apart. Experimental results demonstrate that our method is competitive with state-of-the-art models, achieving 74.54% and 85.70% F1-scores on the Twitter-2015 and Twitter-2017 datasets, respectively.
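
    The bottleneck-token idea can be sketched generically as follows: each modality attends only over its own tokens plus a few shared bottleneck tokens, which in turn gather information from both modalities. This is a hedged illustration of the concept, not the BFCL implementation; all dimensions and names are assumptions:

      # Generic bottleneck-token fusion layer (illustrative only; not the BFCL implementation).
      import torch
      import torch.nn as nn

      class BottleneckFusionLayer(nn.Module):
          def __init__(self, dim=256, heads=4, num_bottleneck=4):
              super().__init__()
              self.bottleneck = nn.Parameter(torch.randn(1, num_bottleneck, dim))
              self.attn_bottleneck = nn.MultiheadAttention(dim, heads, batch_first=True)
              self.attn_text = nn.MultiheadAttention(dim, heads, batch_first=True)
              self.attn_image = nn.MultiheadAttention(dim, heads, batch_first=True)
          def forward(self, text, image):
              b = self.bottleneck.expand(text.size(0), -1, -1)
              # 1) Bottleneck tokens gather information from both modalities.
              ctx = torch.cat([text, image], dim=1)
              b, _ = self.attn_bottleneck(b, ctx, ctx)
              # 2) Each modality attends only over [its own tokens, bottleneck], so
              #    cross-modal information flows solely through the bottleneck tokens.
              kt = torch.cat([text, b], dim=1)
              ki = torch.cat([image, b], dim=1)
              text_out, _ = self.attn_text(text, kt, kt)
              image_out, _ = self.attn_image(image, ki, ki)
              return text_out, image_out, b

      layer = BottleneckFusionLayer()
      t, v, b = layer(torch.randn(2, 20, 256), torch.randn(2, 49, 256))
      print(t.shape, v.shape, b.shape)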

  • Object-ABN: Learning to Generate Sharp Attention Maps for Action Recognition

    Tomoya NITTA  Tsubasa HIRAKAWA  Hironobu FUJIYOSHI  Toru TAMAKI  

     
    PAPER-Image Recognition, Computer Vision

      Publicized: 2022/12/14  Vol: E106-D No:3  Page(s): 391-400

    In this paper we propose an extension of the Attention Branch Network (ABN) that uses instance segmentation to generate sharper attention maps for action recognition. Visual explanation methods such as Grad-CAM usually generate blurry maps that are not intuitive for humans to understand, particularly when recognizing the actions of people in videos. Our proposed method, Object-ABN, tackles this issue by introducing a new mask loss that makes the generated attention maps close to the instance segmentation result. Furthermore, a Prototype Conformity (PC) loss and multiple attention maps are introduced to enhance the sharpness of the maps and improve classification performance. Experimental results on UCF101 and SSv2 show that the maps generated by the proposed method are qualitatively and quantitatively much clearer than those of the original ABN.
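
    One plausible (hypothetical) form of such a mask loss is simply a pixelwise distance between the attention map and the resized instance-segmentation mask; the sketch below assumes an L1 penalty and is not necessarily the paper's exact formulation:

      # Hypothetical mask loss pulling an attention map toward an instance-segmentation mask.
      import torch
      import torch.nn.functional as F

      def mask_loss(attention_map, instance_mask):
          # attention_map: (B, 1, H, W) in [0, 1]; instance_mask: (B, 1, H', W') binary person mask.
          mask = F.interpolate(instance_mask.float(), size=attention_map.shape[-2:], mode="nearest")
          return F.l1_loss(attention_map, mask)

      att = torch.rand(2, 1, 14, 14)
      seg = torch.rand(2, 1, 224, 224) > 0.5
      print(mask_loss(att, seg).item())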

  • DFAM-DETR: Deformable Feature Based Attention Mechanism DETR on Slender Object Detection

    Feng WEN  Mei WANG  Xiaojie HU  

     
    PAPER-Image Recognition, Computer Vision

      Publicized: 2022/12/09  Vol: E106-D No:3  Page(s): 401-409

    Object detection is one of the most important aspects of computer vision, and the use of CNNs for object detection has yielded substantial results in a variety of fields. However, the fixed sampling in standard convolution layers restricts receptive fields to fixed locations and limits CNNs' ability to model geometric transformations, which leads to poor performance on slender objects. To achieve better slender object detection accuracy and efficiency, the proposed detector, DFAM-DETR, not only adjusts the sampling points adaptively but also enhances the ability to focus on slender object features and to extract essential information from global to local scales through an attention mechanism. This study uses images of slender objects from the MS-COCO dataset. The experimental results show that DFAM-DETR achieves excellent detection performance on slender objects compared to CNN-based and transformer-based detectors.
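
    Adaptive sampling of this kind is commonly realized with deformable convolution; the sketch below uses torchvision's DeformConv2d with offsets predicted from the input, as a generic illustration rather than the DFAM-DETR implementation:

      # Generic deformable convolution with learned sampling offsets (illustrative only).
      import torch
      import torch.nn as nn
      from torchvision.ops import DeformConv2d

      class DeformableBlock(nn.Module):
          def __init__(self, in_ch, out_ch, k=3):
              super().__init__()
              # Offsets: 2 coordinates (dx, dy) per kernel position, predicted from the input itself.
              self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
              self.deform_conv = DeformConv2d(in_ch, out_ch, k, padding=k // 2)
          def forward(self, x):
              offsets = self.offset_conv(x)
              return self.deform_conv(x, offsets)

      print(DeformableBlock(64, 128)(torch.randn(1, 64, 32, 32)).shape)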

  • Spatial-Temporal Aggregated Shuffle Attention for Video Instance Segmentation of Traffic Scene

    Chongren ZHAO  Yinhui ZHANG  Zifen HE  Yunnan DENG  Ying HUANG  Guangchen CHEN  

     
    PAPER-Image Processing and Video Processing

      Publicized: 2022/11/24  Vol: E106-D No:2  Page(s): 240-251

    Aiming at the dispersed and misaligned distribution of spatial focus regions in feature pyramid networks and the insufficient capture of feature dependencies in both the spatial and channel dimensions, this paper proposes spatial-temporal aggregated shuffle attention for video instance segmentation (STASA-VIS). First, a mixed subsampling (MS) module is designed to embed activating features from low-level target areas of the feature pyramid into the high-level layers, so as to aggregate spatial information on target areas. Taking advantage of the coherent information in video frames, STASA-VIS uses the first of every five video frames as the keyframe, propagates the keyframe feature maps of the pyramid layers forward in the time domain, and fuses them with the mixed-subsampled features of non-keyframes to achieve temporally consistent feature aggregation. Finally, STASA-VIS embeds shuffle attention in the backbone to capture pixel-level pairwise relationships and dimensional dependencies among channels while reducing computation. Experimental results show that the segmentation accuracy of STASA-VIS reaches 41.2% and the test speed reaches 34 FPS, outperforming state-of-the-art one-stage video instance segmentation (VIS) methods in accuracy and achieving real-time segmentation.

  • Face Hallucination via Multi-Scale Structure Prior Learning

    Yuexi YAO  Tao LU  Kanghui ZHAO  Yanduo ZHANG  Yu WANG  

     
    LETTER-Image

      Publicized: 2022/07/19  Vol: E106-A No:1  Page(s): 92-96

    Recently, deep-learning-based face hallucination methods have learned the mapping between low-resolution (LR) and high-resolution (HR) facial patterns by exploring facial structure priors. However, maintaining face structure consistency when reconstructing face images at different scales remains a challenging problem. In this letter, we propose a novel multi-scale structure prior learning (MSPL) method for face hallucination. First, we propose a multi-scale structure prior block (MSPB). Considering the loss of high-frequency information in the LR space, we process the input image in three spaces of ascending scale, mapping the image to higher-dimensional spaces to extract multi-scale structural prior information. The feature maps are then restored to their original size by downsampling, and finally the multi-scale information is fused to restore the feature channels. On this basis, we propose a local detail attention module (LDAM) to focus on the local texture information of faces. We conduct extensive face hallucination reconstruction experiments on a public face dataset (LFW) to verify the effectiveness of our method.

  • CAA-Net: End-to-End Two-Branch Feature Attention Network for Single Image Dehazing

    Gang JIN  Jingsheng ZHAI  Jianguo WEI  

     
    PAPER-Digital Signal Processing

      Publicized: 2022/07/21  Vol: E106-A No:1  Page(s): 1-10

    In this paper, we propose an end-to-end two-branch feature attention network, CAA-Net, mainly for single image dehazing. The network consists of two branches: 1) a U-Net composed of attention-based multi-level feature fusion (FEPA) structures and residual dense blocks (RDBs). To make full use of all hierarchical features of the image, we use RDBs, which contain densely connected layers and local feature fusion with local residual learning. We also propose the FEPA structure, which retains information from shallow layers and transfers it to deeper layers. FEPA is composed of several feature attention modules (FPA). FPA combines local residual learning with channel attention and pixel attention mechanisms, and extracts features from different channels and image pixels. 2) A network composed of several FEPA structures at different levels, which lets feature weights be learned adaptively from FPA and gives more weight to important features. The final output of CAA-Net is the combination of all branch prediction results. Experimental results show that the proposed CAA-Net surpasses previous state-of-the-art single image dehazing algorithms.
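
    A generic sketch of a module combining channel attention, pixel attention, and local residual learning, the ingredients attributed to FPA above, might look like the following (an assumption-laden illustration, not the CAA-Net code):

      # Generic channel + pixel attention with a local residual connection (illustrative only).
      import torch
      import torch.nn as nn

      class ChannelPixelAttention(nn.Module):
          def __init__(self, channels, reduction=8):
              super().__init__()
              self.conv = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
              self.channel_att = nn.Sequential(
                  nn.AdaptiveAvgPool2d(1),
                  nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
                  nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
              self.pixel_att = nn.Sequential(
                  nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
                  nn.Conv2d(channels // reduction, 1, 1), nn.Sigmoid())
          def forward(self, x):
              y = self.conv(x)
              y = y * self.channel_att(y)   # reweight channels
              y = y * self.pixel_att(y)     # reweight spatial positions
              return x + y                  # local residual learning

      print(ChannelPixelAttention(64)(torch.randn(1, 64, 32, 32)).shape)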

  • Combating Password Vulnerability with Keystroke Dynamics Featured by WiFi Sensing

    Yuanwei HOU  Yu GU  Weiping LI  Zhi LIU  

     
    PAPER-Mobile Information Network and Personal Communications

      Publicized: 2022/04/01  Vol: E105-A No:9  Page(s): 1340-1347

    Fast-evolving credential attacks have been a great security challenge to current password-based information systems. Recently, biometric factors such as the face, iris, or fingerprint, which are difficult to forge, have risen as key elements for designing passwordless authentication. However, capturing and analyzing such factors usually requires special devices, hindering their feasibility and practicality. To this end, we present WiASK, a device-free WiFi sensing enabled Authentication System exploring Keystroke dynamics. More specifically, WiASK captures the keystrokes of a user typing a pre-defined, easy-to-remember string, leveraging the existing WiFi infrastructure. Instead of focusing on the string itself, which is vulnerable to password attacks, WiASK interprets the way it is typed, i.e., the keystroke dynamics, into a user identity, based on the biologically validated correlation between them. We prototype WiASK on low-cost off-the-shelf WiFi devices and verify its performance in three real environments. Empirical results show that WiASK achieves on average 93.7% authentication accuracy, a 2.5% false accept rate, and a 5.1% false reject rate.

  • Altered Fingerprints Detection Based on Deep Feature Fusion

    Chao XU  Yunfeng YAN  Lehangyu YANG  Sheng LI  Guorui FENG  

     
    LETTER-Image Processing and Video Processing

      Publicized: 2022/06/13  Vol: E105-D No:9  Page(s): 1647-1651

    Altered fingerprints help criminals evade the police and cause great harm to society. In this letter, an altered fingerprint detection method is proposed. The method uses two deep convolutional neural networks to learn time-domain and frequency-domain features, with a spectral attention module added to connect the two networks. After the extraction networks, a feature fusion module exploits the relationship between the two networks' features. We conduct ablation experiments and insert the proposed modules into several popular architectures. Results show that the proposed method improves the performance of altered fingerprint detection compared with recent neural networks.

  • MSFF: A Multi-Scale Feature Fusion Network for Surface Defect Detection of Aluminum Profiles

    Lianshan SUN  Jingxue WEI  Hanchao DU  Yongbin ZHANG  Lifeng HE  

     
    LETTER-Image Recognition, Computer Vision

      Publicized: 2022/05/30  Vol: E105-D No:9  Page(s): 1652-1655

    This paper presents an improved YOLOv3 network, named MSFF-YOLOv3, for precisely detecting variable surface defects of aluminum profiles in practice. First, we introduce a larger prediction scale to provide detailed information for small defect detection; second, we design an efficient attention-guided block to extract more defect features with less overhead; third, we design a bottom-up pyramid and integrate it with the existing feature pyramid network to construct a twin-tower structure that improves the circulation and fusion of features from different layers. In addition, we employ the K-median algorithm for anchor clustering to speed up network inference. Experimental results show that the mean average precision of the proposed MSFF-YOLOv3 is higher than that of all conventional networks for surface defect detection of aluminum profiles. Moreover, the number of frames processed per second by MSFF-YOLOv3 meets real-time requirements.
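
    K-median anchor clustering can be sketched as k-medians over ground-truth box widths and heights; the snippet below is a hypothetical illustration of that idea, not necessarily the exact procedure used in the letter:

      # Hypothetical k-medians clustering of box sizes to pick anchor shapes (illustrative only).
      import numpy as np

      def kmedians_anchors(wh, k=9, iters=50, seed=0):
          # wh: (N, 2) array of ground-truth box widths and heights.
          rng = np.random.default_rng(seed)
          centers = wh[rng.choice(len(wh), k, replace=False)]
          for _ in range(iters):
              # Assign each box to the nearest anchor by L1 distance.
              assign = np.abs(wh[:, None, :] - centers[None, :, :]).sum(-1).argmin(1)
              for j in range(k):
                  members = wh[assign == j]
                  if len(members):
                      centers[j] = np.median(members, axis=0)   # median, not mean
          return centers[np.argsort(centers.prod(1))]           # sort anchors by area

      boxes = np.abs(np.random.randn(500, 2)) * 50 + 10
      print(kmedians_anchors(boxes, k=6))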

  • Multi Feature Fusion Attention Learning for Clothing-Changing Person Re-Identification

    Liwei WANG  Yanduo ZHANG  Tao LU  Wenhua FANG  Yu WANG  

     
    LETTER-Image

      Publicized: 2022/01/25  Vol: E105-A No:8  Page(s): 1170-1174

    Person re-identification (Re-ID) aims to match images of the same pedestrian identity across different camera views. Because pedestrians change clothes over relatively long time spans, methods that rely heavily on color appearance information or focus only on biometric features suffer a noticeable performance drop when applied to the clothing-changing setting. To relieve this dilemma, we propose a novel Multi Feature Fusion Attention Network (MFFAN), which learns fine-grained local features. We then introduce a Clothing Adaptive Attention (CAA) module, which integrates features of multiple granularities to guide the model to learn pedestrians' biometric features. Meanwhile, in order to fully verify the performance of our method on the clothing-changing Re-ID problem, we design a Clothing Generation Network (CGN), which can generate multiple pictures of the same identity wearing different clothes. Finally, experimental results show that our method exceeds the current best method by over 5% and 6% on the VCcloth and PRCC datasets, respectively.

  • Latent Influence Based Self-Attention Framework for Heterogeneous Network Embedding

    Yang YAN  Qiuyan WANG  Lin LIU  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized: 2022/03/24  Vol: E105-D No:7  Page(s): 1335-1339

    In recent years, Graph Neural Networks have received enormous attention from academia for their huge potential in modeling network traits such as macrostructure and single-node attributes. However, prior mainstream works mainly focus on homogeneous networks and lack the capacity to characterize heterogeneous properties. Besides, most previous literature cannot model latent influence links at a microscopic level, making it infeasible to model the joint relation between heterogeneity and the mutual interactions within multiple relation types. In this letter, we propose a latent influence based self-attention framework to address these difficulties. To model heterogeneity and mutual interactions, we redesign the attention mechanism with a latent influence factor at the single-relation-type level, which learns importance coefficients from adjacent neighbors under the same meta-path based patterns. To incorporate the heterogeneous meta-paths in a unified dimension, we develop a novel self-attention based framework for meta-path relation fusion according to the learned meta-path coefficients. Our experimental results demonstrate that our framework not only achieves higher results than current state-of-the-art baselines, but also shows promise in depicting heterogeneous interactive relations in complicated network structures.

  • A Survey on Explainable Fake News Detection

    Ken MISHIMA  Hayato YAMANA  

     
    SURVEY PAPER-Data Engineering, Web Information Systems

      Publicized: 2022/04/22  Vol: E105-D No:7  Page(s): 1249-1257

    The increasing amount of fake news is a growing problem that will progressively worsen in our interconnected world. Machine learning, particularly deep learning, is being used to detect misinformation; however, the models employed are essentially black boxes, and thus are uninterpretable. This paper presents an overview of explainable fake news detection models. Specifically, we first review the existing models, datasets, evaluation techniques, and visualization processes. Subsequently, possible improvements in this field are identified and discussed.

  • An Attention Nested U-Structure Suitable for Salient Ship Detection in Complex Maritime Environment

    Weina ZHOU  Ying ZHOU  Xiaoyang ZENG  

     
    PAPER-Information Network

      Publicized: 2022/03/23  Vol: E105-D No:6  Page(s): 1164-1171

    Salient ship detection plays an important role in ensuring the safety of maritime transportation and navigation. However, due to the influence of waves, special weather, and illumination at sea, existing saliency methods are still unable to achieve effective ship detection in complex marine environments. To solve this problem, this paper proposes a novel saliency method based on an attention nested U-structure (AU2Net). First, to make up for the shortcomings of the U-shaped structure, a pyramid pooling module (PPM) and global guidance paths (GGPs) are designed to guide the restoration of feature information. Then, attention modules are added to the nested U-shaped structure to further refine target characteristics. Ultimately, multi-level features and global context features are integrated through a feature aggregation module (FAM) to improve the ability to locate targets. Experimental results demonstrate that the proposed method achieves up to a 36.75% improvement in F-measure (Favg) compared with other state-of-the-art methods.
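
    The pyramid pooling module (PPM) mentioned above is a standard multi-scale context block; a minimal PSPNet-style sketch (illustrative only, not the AU2Net code) follows:

      # Generic pyramid pooling module: pool at several bin sizes, upsample, and fuse (illustrative only).
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class PyramidPooling(nn.Module):
          def __init__(self, channels, bins=(1, 2, 3, 6)):
              super().__init__()
              self.stages = nn.ModuleList([
                  nn.Sequential(nn.AdaptiveAvgPool2d(b), nn.Conv2d(channels, channels // len(bins), 1))
                  for b in bins])
              self.fuse = nn.Conv2d(channels * 2, channels, 3, padding=1)
          def forward(self, x):
              h, w = x.shape[-2:]
              pooled = [F.interpolate(s(x), size=(h, w), mode="bilinear", align_corners=False)
                        for s in self.stages]
              return self.fuse(torch.cat([x] + pooled, dim=1))

      print(PyramidPooling(64)(torch.randn(1, 64, 32, 32)).shape)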

Results 21-40 of 111 hits