
Open Access
Attention-Based Dense LSTM for Speech Emotion Recognition

Yue XIE, Ruiyu LIANG, Zhenlin LIANG, Li ZHAO


Summary

Despite the widespread use of deep learning for speech emotion recognition, deep neural networks are severely restricted by the information loss in their higher layers and by the degradation problem. To utilize information efficiently and alleviate degradation, an attention-based dense long short-term memory (LSTM) network is proposed for speech emotion recognition. LSTM networks, suited to processing time series such as speech, are stacked, and attention-based dense connections are introduced between them. That is, weight coefficients are added to the skip-connections of each layer to distinguish differences in emotional information between layers and to keep redundant information from the lower layers from interfering with the effective information in the upper layers. Experiments demonstrate that the proposed method improves recognition performance by 12% and 7% on the eNTERFACE and IEMOCAP corpora, respectively.
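
The summary describes dense skip-connections between stacked LSTM layers, each scaled by a learnable attention coefficient. The following is a minimal sketch of that idea in PyTorch, not the authors' released code: the class name AttentionDenseLSTM, the scalar-per-connection softmax weighting, and all hyperparameters are illustrative assumptions.

```python
# Sketch of attention-weighted dense connections across stacked LSTM layers.
# All names and dimensions are hypothetical; this is not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDenseLSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, num_classes):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            in_dim = input_dim if i == 0 else hidden_dim
            self.layers.append(nn.LSTM(in_dim, hidden_dim, batch_first=True))
        # One learnable scalar per skip-connection: layer i receives the outputs
        # of layers 0..i-1, each scaled by a softmax-normalized coefficient so
        # redundant low-level information can be down-weighted.
        self.skip_logits = nn.ParameterList(
            [nn.Parameter(torch.zeros(i)) for i in range(1, num_layers)]
        )
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, input_dim) sequence of acoustic frame features
        outputs = []
        h, _ = self.layers[0](x)
        outputs.append(h)
        for i, lstm in enumerate(self.layers[1:]):
            w = F.softmax(self.skip_logits[i], dim=0)  # attention over earlier layers
            dense_in = sum(w[j] * outputs[j] for j in range(len(outputs)))
            h, _ = lstm(dense_in)
            outputs.append(h)
        # Use the last time step of the top layer for utterance-level emotion.
        return self.classifier(outputs[-1][:, -1, :])

# Example usage with assumed sizes: 40-dim frame features, 6 emotion classes.
model = AttentionDenseLSTM(input_dim=40, hidden_dim=128, num_layers=4, num_classes=6)
logits = model(torch.randn(8, 200, 40))  # 8 utterances, 200 frames each
```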

Publication
IEICE TRANSACTIONS on Information Vol.E102-D No.7 pp.1426-1429
Publication Date
2019/07/01
Publicized
2019/04/17
Online ISSN
1745-1361
DOI
10.1587/transinf.2019EDL8019
Type of Manuscript
LETTER
Category
Pattern Recognition

Authors

Yue XIE
  Southeast University
Ruiyu LIANG
  Nanjing Institute of Technology
Zhenlin LIANG
  Southeast University
Li ZHAO
  Southeast University
