
Keyword Search Results

[Keyword] Bi-LSTM (3 hits)

Showing results 1-3 of 3
  • Image Captioning Algorithm Based on Multi-Branch CNN and Bi-LSTM

    Shan HE  Yuanyao LU  Shengnan CHEN  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2021/04/19  Vol: E104-D No:7  Page(s): 941-947

    The development of deep learning and neural networks has opened broad prospects for computer vision and natural language processing. The image captioning task combines cutting-edge methods from both fields, and building an end-to-end encoder-decoder model can greatly improve captioning performance. In this paper, a multi-branch deep convolutional neural network is used as the encoder to extract image features, and a recurrent neural network is used to generate descriptive text that matches the input image. We conducted experiments on the Flickr8k, Flickr30k and MSCOCO datasets. Analysis of the results on standard evaluation metrics shows that the proposed model can effectively generate image captions and outperforms classic image captioning models such as neural image annotation models.
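    A minimal PyTorch sketch of the encoder-decoder pattern described in this abstract is given below. It is not the authors' implementation: the branch kernel sizes, hidden sizes, vocabulary size, and the plain LSTM decoder are placeholder assumptions, used only to illustrate a multi-branch CNN encoder feeding a recurrent caption decoder.

        import torch
        import torch.nn as nn

        class MultiBranchEncoder(nn.Module):
            """Extracts image features through parallel convolutional branches."""
            def __init__(self, out_dim=512):
                super().__init__()
                # Branches with different kernel sizes capture features at
                # different receptive-field scales (assumed configuration).
                self.branches = nn.ModuleList([
                    nn.Sequential(
                        nn.Conv2d(3, 64, kernel_size=k, padding=k // 2),
                        nn.ReLU(inplace=True),
                        nn.AdaptiveAvgPool2d(1),
                    )
                    for k in (3, 5, 7)
                ])
                self.proj = nn.Linear(3 * 64, out_dim)

            def forward(self, images):                     # images: (B, 3, H, W)
                feats = [b(images).flatten(1) for b in self.branches]
                return self.proj(torch.cat(feats, dim=1))  # (B, out_dim)

        class CaptionDecoder(nn.Module):
            """Generates caption logits conditioned on the encoded image feature."""
            def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)
                self.lstm = nn.LSTM(embed_dim + 512, hidden_dim, batch_first=True)
                self.out = nn.Linear(hidden_dim, vocab_size)

            def forward(self, image_feat, captions):       # captions: (B, T) token ids
                emb = self.embed(captions)                 # (B, T, E)
                img = image_feat.unsqueeze(1).expand(-1, emb.size(1), -1)
                h, _ = self.lstm(torch.cat([emb, img], dim=2))
                return self.out(h)                         # (B, T, vocab) logits

        # Toy usage: two 224x224 images and teacher-forced captions of length 12.
        encoder, decoder = MultiBranchEncoder(), CaptionDecoder()
        logits = decoder(encoder(torch.randn(2, 3, 224, 224)),
                         torch.randint(0, 10000, (2, 12)))
        print(logits.shape)  # torch.Size([2, 12, 10000])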

  • Tweet Stance Detection Using Multi-Kernel Convolution and Attentive LSTM Variants

    Umme Aymun SIDDIQUA  Abu Nowshed CHY  Masaki AONO  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2019/09/25  Vol: E102-D No:12  Page(s): 2493-2503

    Stance detection on Twitter aims at mining the stance a user expresses in a tweet towards one or more target entities. Detecting and analyzing user stances in massive opinion-oriented Twitter posts provides enormous opportunities for journalists, governments, companies, and other organizations. Most prior studies have explored traditional deep learning models, e.g., long short-term memory (LSTM) and gated recurrent units (GRU), for detecting stance in tweets. However, the recently proposed densely connected bidirectional LSTM and nested LSTM architectures address the vanishing-gradient and overfitting problems of these traditional approaches more effectively and handle long-term dependencies better. In this paper, we propose a neural network model that adopts the strengths of these two LSTM variants to learn better long-term dependencies, where each module is coupled with an attention mechanism that amplifies the contribution of important elements to the final representation. We also employ a multi-kernel convolution on top of them to extract higher-level tweet representations. Results of extensive experiments on single- and multi-target benchmark stance detection datasets show that our proposed method achieves substantial improvements over current state-of-the-art deep learning based methods.
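    The architecture summarized above, attentive LSTM variants topped with a multi-kernel convolution, can be illustrated with the rough PyTorch sketch below. It substitutes a plain bidirectional nn.LSTM for the densely connected and nested LSTM modules and uses placeholder dimensions and class counts, so it shows only the general pattern, not the paper's model.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class AttentiveBiLSTM(nn.Module):
            """BiLSTM encoder whose outputs are reweighted by soft attention."""
            def __init__(self, embed_dim=300, hidden_dim=128):
                super().__init__()
                self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                                    bidirectional=True)
                self.attn = nn.Linear(2 * hidden_dim, 1)

            def forward(self, x):                           # x: (B, T, embed_dim)
                h, _ = self.lstm(x)                         # (B, T, 2*hidden)
                # Attention amplifies the important time steps in the sequence.
                weights = torch.softmax(self.attn(h), dim=1)  # (B, T, 1)
                return h * weights                          # reweighted sequence

        class StanceClassifier(nn.Module):
            def __init__(self, embed_dim=300, hidden_dim=128, n_classes=3):
                super().__init__()
                self.encoder = AttentiveBiLSTM(embed_dim, hidden_dim)
                # Multi-kernel convolution: parallel Conv1d layers of different widths.
                self.convs = nn.ModuleList([
                    nn.Conv1d(2 * hidden_dim, 100, kernel_size=k, padding=k // 2)
                    for k in (2, 3, 4)
                ])
                self.fc = nn.Linear(3 * 100, n_classes)

            def forward(self, x):                           # x: (B, T, embed_dim)
                h = self.encoder(x).transpose(1, 2)         # (B, 2*hidden, T) for Conv1d
                pooled = [F.relu(c(h)).max(dim=2).values for c in self.convs]
                return self.fc(torch.cat(pooled, dim=1))    # (B, n_classes) logits

        # Toy usage: a batch of 4 tweets, 30 tokens each, with 300-d embeddings.
        logits = StanceClassifier()(torch.randn(4, 30, 300))
        print(logits.shape)  # torch.Size([4, 3])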

  • An Attention-Based Hybrid Neural Network for Document Modeling

    Dengchao HE  Hongjun ZHANG  Wenning HAO  Rui ZHANG  Huan HAO  

     
    LETTER-Artificial Intelligence, Data Mining
    Publicized: 2017/03/21  Vol: E100-D No:6  Page(s): 1372-1375

    The purpose of document modeling is to learn accurate low-dimensional semantic representations of text for natural language processing tasks. In this paper, we propose a novel attention-based hybrid neural network model that extracts semantic features of text hierarchically. Concretely, our model adopts a bidirectional LSTM module with word-level attention to extract semantic information for each sentence in a text, and subsequently learns higher-level features via a dynamic convolutional neural network module. Experimental results demonstrate that the proposed approach is effective and achieves better performance than conventional methods.
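    As an illustration of the hierarchical idea in this abstract, a word-level attentive Bi-LSTM per sentence followed by a convolutional module over sentence vectors, the PyTorch sketch below uses a plain Conv1d with k-max pooling in place of the paper's dynamic CNN module; all dimensions, k, and the class count are placeholder assumptions rather than the authors' settings.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SentenceEncoder(nn.Module):
            """BiLSTM over words with additive attention pooling to one vector."""
            def __init__(self, embed_dim=200, hidden_dim=100):
                super().__init__()
                self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                                    bidirectional=True)
                self.attn = nn.Linear(2 * hidden_dim, 1)

            def forward(self, words):                   # (B, T_words, embed_dim)
                h, _ = self.lstm(words)                 # (B, T_words, 2*hidden)
                a = torch.softmax(self.attn(h), dim=1)  # word-level attention weights
                return (a * h).sum(dim=1)               # (B, 2*hidden) sentence vector

        class DocumentModel(nn.Module):
            def __init__(self, embed_dim=200, hidden_dim=100, k=4, n_classes=2):
                super().__init__()
                self.sent_enc = SentenceEncoder(embed_dim, hidden_dim)
                self.conv = nn.Conv1d(2 * hidden_dim, 64, kernel_size=3, padding=1)
                self.k = k
                self.fc = nn.Linear(64 * k, n_classes)

            def forward(self, doc):                     # (B, n_sents, T_words, embed_dim)
                B, S, T, E = doc.shape
                sents = self.sent_enc(doc.view(B * S, T, E)).view(B, S, -1)
                c = F.relu(self.conv(sents.transpose(1, 2)))    # (B, 64, S)
                # k-max pooling keeps the k strongest activations per feature map.
                kmax = c.topk(self.k, dim=2).values.flatten(1)  # (B, 64*k)
                return self.fc(kmax)                            # document-level logits

        # Toy usage: 2 documents, 8 sentences each, 20 words per sentence.
        out = DocumentModel()(torch.randn(2, 8, 20, 200))
        print(out.shape)  # torch.Size([2, 2])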