This letter presents a novel approach to automatic multimodal affect recognition. The audio and visual channels provide complementary information for recognizing human affective states, and we use Boltzmann zippers as a model-level fusion scheme to learn the intrinsic correlations between the two modalities. We extract effective audio and visual feature streams at different time scales and feed them into two component Boltzmann chains, respectively. The hidden units of the two chains are interconnected to form a Boltzmann zipper, which effectively avoids local energy minima during training. Second-order methods are applied to the Boltzmann zipper to speed up the learning and pruning processes. Experimental results on audio-visual emotion data recorded in Wizard-of-Oz scenarios and on data collected from the SEMAINE naturalistic database both demonstrate that our approach is robust and outperforms state-of-the-art methods.
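The abstract describes two component Boltzmann chains whose hidden units are interconnected to form a zipper. As a rough illustration of how such a coupled energy function could look, the sketch below defines a joint energy over two chains with a cross-chain coupling term between their hidden layers; all variable names, shapes, and the exact energy form are assumptions for illustration, not taken from the letter.

```python
import numpy as np

def zipper_energy(v_a, h_a, v_b, h_b, W_a, W_b, C):
    """Energy of a joint configuration of a hypothetical two-chain 'zipper'.

    v_a, v_b : visible feature vectors of the audio and visual chains
    h_a, h_b : binary hidden-unit vectors of the two chains
    W_a, W_b : visible-to-hidden weight matrices within each chain
    C        : cross-chain weights coupling the two hidden layers
    """
    e_within = -(h_a @ W_a @ v_a) - (h_b @ W_b @ v_b)  # within-chain interactions
    e_across = -(h_a @ C @ h_b)                        # hidden-to-hidden coupling
    return e_within + e_across

# Toy example with random parameters, just to show the shapes involved.
rng = np.random.default_rng(0)
v_audio, v_visual = rng.normal(size=6), rng.normal(size=4)
h_audio, h_visual = rng.integers(0, 2, size=3), rng.integers(0, 2, size=3)
W_a, W_b = rng.normal(size=(3, 6)), rng.normal(size=(3, 4))
C = rng.normal(size=(3, 3))
print(zipper_energy(v_audio, h_audio, v_visual, h_visual, W_a, W_b, C))
```

Lower energy corresponds to configurations where the two modalities' hidden representations agree with each other as well as with their own inputs, which is the intuition behind using the coupled hidden layers for fusion.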
In this paper, we present preliminary work on recognizing affect in Korean textual documents using a manually built affect lexicon and natural language processing tools. The affect lexicon is constructed by hand so that a wide range of emotional expressions can be detected, and each of its entries is an emotion vector. The natural language processing tools analyze an input document to improve the accuracy of our affect recognizer. We evaluate the recognizer by automatically classifying song lyrics according to mood.
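Since the abstract describes lexicon entries as emotion vectors used to classify documents by mood, a minimal sketch of that idea is shown below: each lexicon word maps to a vector over emotion categories, and a document is scored by averaging the vectors of the lexicon words it contains. The emotion categories, lexicon entries, and scoring rule are invented here for illustration and are not the paper's actual lexicon or method.

```python
import numpy as np

# Hypothetical emotion categories and a tiny toy lexicon of emotion vectors.
EMOTIONS = ["joy", "sadness", "anger", "fear"]
AFFECT_LEXICON = {
    "happy":   np.array([0.9, 0.0, 0.0, 0.0]),
    "lonely":  np.array([0.0, 0.8, 0.0, 0.1]),
    "furious": np.array([0.0, 0.1, 0.9, 0.0]),
}

def affect_vector(tokens):
    """Average the emotion vectors of tokens found in the lexicon."""
    hits = [AFFECT_LEXICON[t] for t in tokens if t in AFFECT_LEXICON]
    if not hits:
        return np.zeros(len(EMOTIONS))
    return np.mean(hits, axis=0)

def dominant_mood(tokens):
    """Return the emotion category with the highest aggregated score."""
    return EMOTIONS[int(np.argmax(affect_vector(tokens)))]

# Example: classify a tokenized lyric line by its dominant mood.
print(dominant_mood(["i", "feel", "so", "lonely", "tonight"]))  # -> "sadness"
```

In practice the tokens would come from morphological analysis of the Korean text by the NLP tools mentioned in the abstract, rather than from simple whitespace splitting as in this toy example.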