
Keyword Search Result

[Keyword] music generation (2 hits)

Results 1-2
  • Dance-Conditioned Artistic Music Generation by Creative-GAN (Open Access)

    Jiang HUANG, Xianglin HUANG, Lifang YANG, Zhulin TAO

    PAPER-Multimedia Environment Technology

    Publicized: 2023/08/23
    Vol: E107-A No:5
    Page(s): 836-844

    We present a novel adversarial, end-to-end framework based on Creative-GAN to generate artistic music conditioned on dance videos. The proposed framework takes visual and motion posture data as input and adopts a quantized vector as the audio representation to generate complex music corresponding to that input. However, a standard GAN merely imitates and reproduces works that humans have created, rather than generating something new and creative. We therefore introduce Creative-GAN, which extends the original GAN framework to two discriminators: one determines whether the music is real, and the other classifies its style. The paper shows that the proposed Creative-GAN can generate novel and interesting music that is not found in the training dataset. To evaluate the model, a comprehensive scheme combining subjective and objective evaluation is introduced. Compared with state-of-the-art methods, our approach performs better in terms of music rhythm, generation diversity, dance-music correlation, and overall quality of the generated music. (See the sketch after this list for the two-discriminator idea.)

  • Online EEG-Based Emotion Prediction and Music Generation for Inducing Affective States

    Kana MIYAMOTO, Hiroki TANAKA, Satoshi NAKAMURA

    PAPER-Human-computer Interaction

    Publicized: 2022/02/15
    Vol: E105-D No:5
    Page(s): 1050-1063

    Music is often used for emotion induction because it can change people's emotions. However, because individuals subjectively feel different emotions when listening to the same music, we propose an emotion induction system that generates music adapted to each individual. Our system automatically generates music suitable for emotion induction based on the emotions predicted from an electroencephalogram (EEG). We examined three elements for constructing the system: 1) a music generator that creates music inducing emotions that resemble its inputs, 2) real-time emotion prediction from EEG, and 3) control of the music generator using the predicted emotions so that the generated music is suitable for inducing the target emotions. We built the proposed system from these elements and evaluated it. The results showed its effectiveness for emotion induction and suggest that feedback loops tailoring stimuli to individuals can successfully induce emotions. (See the sketch after this list for the feedback-loop idea.)
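
The Creative-GAN entry above hinges on a two-discriminator setup: one discriminator judges whether generated music is real, the other classifies its style, and the generator is trained against both. The sketch below illustrates that idea only, presumably in the spirit of creative-adversarial training where the generator is rewarded for style ambiguity; the module shapes, the uniform-style ambiguity loss, and all names (Generator, RealFakeD, StyleD, FEAT_DIM, TOKEN_DIM, N_STYLES) are assumptions for illustration, not details taken from the paper.

# Minimal sketch of a two-discriminator ("creative") GAN generator step.
# Everything here is an illustrative assumption, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, TOKEN_DIM, N_STYLES = 512, 256, 8  # assumed feature sizes and style count

class Generator(nn.Module):
    """Maps dance (visual + motion posture) features to a quantized-audio embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, 512), nn.ReLU(), nn.Linear(512, TOKEN_DIM))

    def forward(self, dance_feat):
        return self.net(dance_feat)

class RealFakeD(nn.Module):
    """Discriminator 1: is this music embedding real?"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TOKEN_DIM, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)

class StyleD(nn.Module):
    """Discriminator 2: which music style is this?"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TOKEN_DIM, 256), nn.ReLU(), nn.Linear(256, N_STYLES))

    def forward(self, x):
        return self.net(x)

def generator_loss(fake, d_real_fake, d_style):
    """Adversarial realism plus style ambiguity: the generator tries to look real
    while keeping the style classifier maximally uncertain (uniform over styles)."""
    realism = F.binary_cross_entropy_with_logits(
        d_real_fake(fake), torch.ones(fake.size(0), 1))
    log_probs = F.log_softmax(d_style(fake), dim=1)
    ambiguity = -(log_probs.mean(dim=1)).mean()  # cross-entropy against a uniform style target
    return realism + ambiguity

# Usage: one generator update on placeholder dance-video features.
g, d1, d2 = Generator(), RealFakeD(), StyleD()
dance = torch.randn(4, FEAT_DIM)
loss = generator_loss(g(dance), d1, d2)
loss.backward()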
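
The second entry describes a closed loop: EEG is decoded into an emotion estimate in real time, and that estimate steers the music generator so the listener's state moves toward a target emotion. Below is a minimal sketch of such a loop under broad assumptions; the valence/arousal representation, the proportional-control update, and every function name (predict_emotion, generate_music_segment, induction_loop) are placeholders, not the paper's actual components.

# Minimal sketch of an EEG-driven music-generation feedback loop.
# All interfaces and the emotion model are illustrative assumptions.
import numpy as np

def predict_emotion(eeg_window: np.ndarray) -> np.ndarray:
    """Placeholder EEG decoder: returns an estimated (valence, arousal) in [-1, 1]."""
    return np.tanh(eeg_window.mean(axis=0)[:2])

def generate_music_segment(target: np.ndarray) -> dict:
    """Placeholder generator: condition simple musical parameters on the target emotion."""
    tempo = 90 + 60 * target[1]  # assumed mapping: higher arousal -> faster tempo
    return {"tempo": float(tempo), "valence": float(target[0])}

def induction_loop(eeg_stream, target_emotion, gain=0.5, steps=10):
    """Each iteration: read an EEG window, estimate the current emotion, and nudge
    the generator's conditioning toward the desired state (proportional control)."""
    command = np.array(target_emotion, dtype=float)
    for _ in range(steps):
        eeg_window = next(eeg_stream)  # e.g. a few seconds of samples
        current = predict_emotion(eeg_window)
        command = np.clip(command + gain * (np.asarray(target_emotion) - current), -1.0, 1.0)
        yield current, generate_music_segment(command)

# Usage with a dummy EEG stream: 10 windows of 256 samples x 32 channels.
dummy_stream = iter(np.random.randn(10, 256, 32))
for emotion, segment in induction_loop(dummy_stream, target_emotion=(0.8, 0.3)):
    pass  # here `segment` would be synthesized/played and `emotion` logged for evaluation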