
Keyword Search Result

[Keyword] affect (9 hits)

1-9 of 9 hits
  • Attractiveness Computing in Image Media

    Toshihiko YAMASAKI  

     
    INVITED PAPER-Vision
    Publicized: 2023/06/16
    Vol: E106-A No:9  Page(s): 1196-1201

    Our research group has been working on attractiveness prediction, reasoning, and even enhancement for multimedia content, which we call “attractiveness computing.” Attractiveness includes impressiveness, instagrammability, memorability, clickability, and so on. Analyzing such attractiveness has usually been done by experienced professionals, but we have experimentally shown that artificial intelligence (AI) trained on large multimedia datasets can imitate or reproduce professionals' skills in some cases. In this paper, we introduce some representative works and possible real-life applications of our attractiveness computing for image media.
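
    As a minimal sketch of the prediction side (not the authors' actual pipeline), attractiveness prediction can be framed as supervised regression over image features; the embeddings and scores below are synthetic placeholders:

        # Hypothetical sketch: attractiveness prediction as regression over
        # precomputed image embeddings. Features and labels are synthetic
        # stand-ins, not the paper's data or model.
        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 512))                      # stand-in CNN embeddings
        y = X @ rng.normal(size=512) + rng.normal(size=1000)  # stand-in attractiveness scores

        model = Ridge(alpha=1.0).fit(X[:800], y[:800])
        print("held-out R^2:", model.score(X[800:], y[800:]))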

  • A Personality Model Based on NEO PI-R for Emotion Simulation

    Yi ZHANG  Ling LI  

     
    PAPER-Affective Computing
    Vol: E97-D No:8  Page(s): 2000-2007

    The last decade has witnessed an explosion of interest in research on human emotion modeling for generating intelligent virtual agents. This paper proposes a novel personality model based on the Revised NEO Personality Inventory (NEO PI-R). Compared to the popular Big-Five personality factors (Big5) model, the proposed model is more capable of describing a variety of personalities. Combined with emotion models, it helps produce more reasonable emotional reactions to external stimuli. A novel Resistant formulation is also proposed to effectively simulate complicated negative emotions, and emotional reactions toward multiple stimuli are simulated effectively with the proposed personality model.
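
    As an illustration of how a finer-grained personality vector can modulate emotional reactions (the facet-to-emotion weights below are invented, not the paper's calibrated model):

        # Hypothetical sketch: a 30-facet NEO PI-R personality vector scaling
        # the intensity of an appraised emotional reaction to a stimulus.
        import numpy as np

        rng = np.random.default_rng(1)
        facets = rng.uniform(0.0, 1.0, size=30)  # 5 domains x 6 facets, scores in [0, 1]
        W = rng.normal(scale=0.1, size=(6, 30))  # invented facet-to-emotion weights
        base = np.array([0.2, 0.7, 0.1, 0.0, 0.0, 0.0])  # appraisal of a stimulus

        reaction = np.clip(base * (1.0 + W @ facets), 0.0, 1.0)
        emotions = ["joy", "sadness", "anger", "fear", "disgust", "surprise"]
        print(dict(zip(emotions, reaction.round(2))))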

  • Multimodal Affect Recognition Using Boltzmann Zippers

    Kun LU  Xin ZHANG  

     
    LETTER-Image Recognition, Computer Vision
    Vol: E96-D No:11  Page(s): 2496-2499

    This letter presents a novel approach to automatic multimodal affect recognition. The audio and visual channels provide complementary information for recognizing human affective states, and we use Boltzmann zippers as model-level fusion to learn the intrinsic correlations between the two modalities. We extract effective audio and visual feature streams at different time scales and feed them to two component Boltzmann chains, respectively. The hidden units of the two chains are interconnected to form a Boltzmann zipper, which can effectively avoid local energy minima during training. Second-order methods are applied to Boltzmann zippers to speed up the learning and pruning processes. Experimental results both on audio-visual emotion data that we recorded in Wizard-of-Oz scenarios and on data from the SEMAINE naturalistic database demonstrate that our approach is robust and outperforms state-of-the-art methods.
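
    A schematic view of the fusion (weights are random placeholders; the paper's second-order training is not reproduced here): each chain scores its own modality, and cross-links between the hidden layers couple the two.

        # Hypothetical sketch of a "zipper" energy over two Boltzmann chains.
        import numpy as np

        rng = np.random.default_rng(2)
        h_audio = rng.integers(0, 2, size=8)   # binary hidden states, audio chain
        h_visual = rng.integers(0, 2, size=8)  # binary hidden states, visual chain

        W_a = np.triu(rng.normal(size=(8, 8)), 1)  # within-chain couplings (audio)
        W_v = np.triu(rng.normal(size=(8, 8)), 1)  # within-chain couplings (visual)
        W_z = rng.normal(size=(8, 8))              # cross-modal "zipper" links

        def energy(ha, hv):
            # lower energy = more compatible joint audio-visual configuration
            return -(ha @ W_a @ ha + hv @ W_v @ hv + ha @ W_z @ hv)

        print("zipper energy:", energy(h_audio, h_visual))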

  • Generating and Describing Affective Eye Behaviors

    Xia MAO  Zheng LI  

     
    PAPER-Kansei Information Processing, Affective Information Processing
    Vol: E93-D No:5  Page(s): 1282-1290

    The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents, based on parameters selected from the AU-coded facial expression database and real-time eye movement data (pupil size, blink rate, and saccades). A rule-based approach is introduced that generates primary emotions (joyful, sad, angry, afraid, disgusted, and surprised) and intermediate emotions (emotions that can be represented as mixtures of two primary emotions) using MPEG-4 FAPs (facial animation parameters). In addition, a scripting tool named EEMML (Emotional Eye Movement Markup Language) is proposed that enables authors to describe and generate the emotional eye movement of virtual agents.
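
    The mixture idea can be sketched as a linear blend of two primary-emotion FAP profiles (the FAP values below are made up; real profiles come from the AU-coded database the paper describes):

        # Hypothetical sketch: an intermediate emotion as a weighted blend of
        # two primary-emotion MPEG-4 FAP vectors.
        import numpy as np

        fap_joy = np.array([60.0, 40.0, 10.0, 0.0, 25.0])       # toy 5-FAP profile
        fap_surprise = np.array([10.0, 80.0, 70.0, 50.0, 5.0])  # toy 5-FAP profile

        def intermediate(fap_a, fap_b, w=0.5):
            """Blend two primary profiles into an intermediate emotion."""
            return w * fap_a + (1.0 - w) * fap_b

        print("'delight' (joy + surprise):", intermediate(fap_joy, fap_surprise, w=0.6))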

  • Automatic Affect Recognition Using Natural Language Processing Techniques and Manually Built Affect Lexicon

    Young Hwan CHO  Kong Joo LEE  

     
    PAPER-Natural Language Processing
    Vol: E89-D No:12  Page(s): 2964-2971

    In this paper, we present preliminary work on recognizing affect in Korean text using a manually built affect lexicon and natural language processing tools. The affect lexicon is constructed manually so that a variety of emotional expressions can be detected, and its entries consist of emotion vectors. The natural language processing tools analyze an input document to enhance the accuracy of our affect recognizer. The recognizer's performance is evaluated through automatic classification of song lyrics according to their moods.
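
    A minimal sketch of the lexicon lookup (the tiny English lexicon below is invented; the paper uses a manually built Korean lexicon plus NLP tools such as morphological analysis before lookup):

        # Hypothetical sketch: document affect as the normalized sum of the
        # emotion vectors of matched lexicon entries.
        import numpy as np

        EMOTIONS = ["joy", "sadness", "anger", "fear"]
        LEXICON = {  # word -> emotion vector over EMOTIONS
            "love": np.array([0.9, 0.0, 0.0, 0.0]),
            "tears": np.array([0.0, 0.8, 0.0, 0.1]),
            "alone": np.array([0.0, 0.6, 0.1, 0.2]),
        }

        def document_affect(tokens):
            vecs = [LEXICON[t] for t in tokens if t in LEXICON]
            if not vecs:
                return None
            total = np.sum(vecs, axis=0)
            return total / total.sum()

        lyric = "tears fall when i am alone without love".split()
        print(dict(zip(EMOTIONS, document_affect(lyric).round(2))))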

  • A Computational Model for Recognizing Emotion with Intensity for Machine Vision Applications

    P. Ravindra De SILVA  Minetada OSANO  Ashu MARASINGHE  Ajith P. MADURAPPERUMA  

     
    PAPER-Face, Gesture, and Action Recognition
    Vol: E89-D No:7  Page(s): 2171-2179

    One of the challenging issues in affective computing is giving a machine the ability to recognize a person's affective states and their intensity. A few studies have approached this goal by categorizing a person's affective behavior into a set of discrete categories, but two problems remain: gesture has not yet been considered as a channel of affective communication in interactive technology, and existing systems model only discrete categories, not affective dimensions such as intensity. Modeling the intensity of emotion has been well addressed in the synthetic autonomous agent and virtual environment literature, but there is an evident lack of attention in other important research areas such as affective computing, machine vision, and robotics. In this work, we propose an affective gesture recognition system that can recognize a child's emotion and the intensity of the emotional state in a game-playing scenario. We use levels of cognitive and non-cognitive appraisal factors to estimate the intensity of emotion. The system has an intelligent agent (called Mix) that takes these factors into consideration and adapts the game state to create a more positive interactive environment for the child.
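
    The intensity estimate can be sketched as a weighted combination of appraisal-factor levels (the factor names and weights below are illustrative, not the paper's calibration):

        # Hypothetical sketch: emotion intensity from cognitive and
        # non-cognitive appraisal factor levels in [0, 1].
        def emotion_intensity(appraisals, weights):
            s = sum(weights[k] * appraisals[k] for k in appraisals)
            return min(1.0, max(0.0, s))  # clamp combined score to [0, 1]

        appraisals = {"goal_progress": 0.8, "unexpectedness": 0.6, "arousal": 0.7}
        weights = {"goal_progress": 0.5, "unexpectedness": 0.2, "arousal": 0.3}
        print("intensity:", round(emotion_intensity(appraisals, weights), 2))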

  • Human Physiology as a Basis for Designing and Evaluating Affective Communication with Life-Like Characters

    Helmut PRENDINGER  Mitsuru ISHIZUKA  

     
    INVITED PAPER
    Vol: E88-D No:11  Page(s): 2453-2460

    This paper highlights some of our recent research efforts in designing and evaluating life-like characters that are capable of engaging in affective and social communication with human users. The key novelty of our approach is the use of human physiological information: first, as a method to evaluate the effect of life-like character behavior on a moment-to-moment basis, and second, as an input modality for a new generation of interface agents that we call 'physiologically perceptive' life-like characters. By exploiting the stream of primarily involuntary human responses, such as autonomic nervous system activity or eye movements, these characters are expected to respond to users' affective and social needs in a truly sensitive, and hence effective, friendly, and beneficial way.

  • Developments in Corpus-Based Speech Synthesis: Approaching Natural Conversational Speech

    Nick CAMPBELL  

     
    INVITED PAPER
    Vol: E88-D No:3  Page(s): 376-383

    This paper describes the special demands of conversational speech in the context of corpus-based speech synthesis. The author proposed the CHATR system of prosody-based unit selection for concatenative waveform synthesis seven years ago, and now extends this work to incorporate the results of an analysis of five years of recordings of spontaneous conversational speech in a wide range of actual daily-life situations. The paper proposes that the expression of affect (often translated as 'kansei' in Japanese) is the main factor differentiating laboratory speech from real-world conversational speech, and presents a framework for specifying affect through differences in speaking style and voice quality. Having an enormous corpus of speech samples available for concatenation allows the selection of complete phrase-sized utterance segments, and shifts the focus of unit selection from segmental or phonetic continuity to prosodic and discoursal appropriateness. Samples of the resulting large-corpus-based synthesis can be heard at http://feast.his.atr.jp/AESOP.
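
    The shift in selection criteria can be sketched as a dynamic-programming unit-selection pass whose target cost measures prosodic mismatch rather than segmental continuity (all candidates, costs, and pitch values below are invented; this is not CHATR's implementation):

        # Hypothetical sketch: pick one phrase-sized unit per slot, minimizing
        # target cost (prosodic mismatch) plus a join cost at concatenation points.
        # candidates[slot] = list of (unit_id, target_cost, boundary_pitch_hz)
        candidates = [
            [("a1", 0.3, 120.0), ("a2", 0.1, 180.0)],
            [("b1", 0.2, 130.0), ("b2", 0.4, 170.0)],
            [("c1", 0.5, 125.0), ("c2", 0.2, 160.0)],
        ]

        def join_cost(pitch_a, pitch_b):
            return abs(pitch_a - pitch_b) / 100.0  # toy: penalize pitch jumps at joins

        pitch_of = {uid: p for slot in candidates for uid, _, p in slot}
        best = {uid: (tc, [uid]) for uid, tc, _ in candidates[0]}
        for slot in candidates[1:]:
            cur = {}
            for uid, tc, p in slot:
                cost, path = min(
                    (pcost + join_cost(pitch_of[puid], p) + tc, ppath + [uid])
                    for puid, (pcost, ppath) in best.items()
                )
                cur[uid] = (cost, path)
            best = cur

        total, path = min(best.values())
        print("selected units:", path, "total cost:", round(total, 2))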

  • Ka-Band LMS Channel Model with Rain Attenuation and Other Atmospheric Impairments in Equatorial Zone

    Wenzhen LI  Choi Look LAW  Jin Teong ONG  Vimal Kishore DUBEY  

     
    PAPER-Antenna and Propagation
    Vol: E84-B No:12  Page(s): 3265-3273

    In this paper, the statistical characteristics of rain attenuation in the equatorial zone are investigated. A more realistic land mobile satellite (LMS) channel model incorporating weather impairments is proposed and compared with the weather-affected Ka-band LMS channel model suggested by Loo. The proposed model uses Lutz's LMS channel model as its basis. The PDF of the received signal and the BER performance derived from Loo's model and from the proposed model are quantified and compared to verify the proposed model's effectiveness. Finally, the influence of weather impairments on BER performance is evaluated under various weather conditions, which clearly shows the superiority of the proposed model.
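
    For reference, Lutz's model (adopted here as the basis) is a two-state mixture: an unshadowed Rician state and a shadowed state that is Rayleigh with lognormally distributed short-term mean power. Under the usual parameterization (our notation, not necessarily the paper's), the density of the normalized received power S takes the form:

        % Lutz two-state LMS channel model (standard form; notation assumed)
        p(S) = (1 - A)\, p_{\mathrm{Rice}}(S)
             + A \int_{0}^{\infty} \frac{1}{S_0}\, e^{-S/S_0}\, p_{\mathrm{LN}}(S_0)\, \mathrm{d}S_0,
        \qquad
        p_{\mathrm{Rice}}(S) = c\, e^{-c(S+1)}\, I_0\!\left(2c\sqrt{S}\right),

    where A is the time share of shadowing, c is the Rice factor, p_LN is the lognormal density of the short-term mean power S_0, and I_0 is the modified Bessel function of the first kind.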