
Author Search Result

[Author] Kento WATANABE (3 hits)

Showing hits 1-3 of 3
  • Modeling Storylines in Lyrics

    Kento WATANABE  Yuichiroh MATSUBAYASHI  Kentaro INUI  Satoru FUKAYAMA  Tomoyasu NAKANO  Masataka GOTO  

     
    PAPER-Natural Language Processing

      Publicized:
    2017/12/22
      Vol:
    E101-D No:4
      Page(s):
    1167-1179

    This paper addresses the issue of modeling the discourse nature of lyrics and presents the first study aiming to capture two common discourse-related notions: storylines and themes. We assume that a storyline is a chain of transitions over the topics of lyric segments and that a song has at least one theme spanning its entirety. We then hypothesize that transitions over the topics of lyric segments can be captured by a probabilistic topic model that incorporates a distribution over transitions of latent topics, and that this distribution of topic transitions is affected by the theme of the lyrics. To test these hypotheses, this study conducts experiments on word prediction and segment-order prediction tasks using a large-scale corpus of popular music lyrics in both English and Japanese (around 100 thousand songs). The findings from these experiments can be summarized in two respects. First, the models with topic transitions significantly outperformed the model without topic transitions in word prediction, indicating that the typical storylines in our lyrics datasets were effectively captured as a probabilistic distribution of transitions over the latent topics of segments. Second, the model incorporating a latent theme variable on top of topic transitions outperformed the models without such a variable in both word prediction and segment-order prediction. From this result, we conclude that considering the notion of theme does contribute to modeling the storylines of lyrics.
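
    To make the modeling idea above concrete, here is a minimal generative sketch, not the paper's actual model: a song draws a latent theme, the theme selects a topic-transition matrix, each lyric segment draws a topic from the resulting chain, and words are drawn from the segment's topic. All sizes, priors, and names are illustrative assumptions.

```python
# Minimal generative sketch of "theme -> topic transitions -> segment words".
# Everything here (dimensions, Dirichlet priors, variable names) is illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_themes, n_topics, vocab_size = 3, 5, 50
segments_per_song, words_per_segment = 6, 8

# Theme-specific topic-transition matrices and topic-word distributions,
# drawn from symmetric Dirichlet priors purely for illustration.
theme_prior = rng.dirichlet(np.ones(n_themes))
initial_topic = rng.dirichlet(np.ones(n_topics), size=n_themes)
transition = rng.dirichlet(np.ones(n_topics), size=(n_themes, n_topics))
topic_word = rng.dirichlet(np.ones(vocab_size), size=n_topics)

def generate_song():
    """Sample one song: a theme, a chain of segment topics, and segment words."""
    theme = rng.choice(n_themes, p=theme_prior)
    topic = rng.choice(n_topics, p=initial_topic[theme])
    song = []
    for _ in range(segments_per_song):
        words = rng.choice(vocab_size, size=words_per_segment, p=topic_word[topic])
        song.append((topic, words.tolist()))
        # The "storyline" step: the next segment's topic depends on the current
        # topic, and the transition distribution depends on the song's theme.
        topic = rng.choice(n_topics, p=transition[theme, topic])
    return theme, song

theme, song = generate_song()
print("theme:", theme)
for topic, words in song:
    print("segment topic", topic, "word ids", words)
```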

  • Heartbeat Interval Error Compensation Method for Low Sampling Rates Photoplethysmography Sensors

    Kento WATANABE  Shintaro IZUMI  Yuji YANO  Hiroshi KAWAGUCHI  Masahiko YOSHIMOTO  

     
    PAPER

      Publicized:
    2019/12/25
      Vol:
    E103-B No:6
      Page(s):
    645-652

    This study presents a method for improving the heartbeat-interval accuracy of photoplethysmographic (PPG) sensors at ultra-low sampling rates. Although reducing the sampling rate can extend battery life, it increases the sampling error and degrades the accuracy of the extracted heartbeat interval. To overcome these drawbacks, a sampling-error compensation method is proposed. The sampling error is reduced by using linear interpolation and autocorrelation based on the waveform similarity of heartbeats in the PPG signal. Furthermore, this study introduces a two-line approximation and the first-derivative PPG (FDPPG) to improve the waveform similarity at ultra-low sampling rates. The proposed method was evaluated using measured PPG and reference electrocardiogram (ECG) signals from seven subjects. The results reveal that a mean absolute error (MAE) of 4.11 ms was achieved for the heartbeat intervals at a sampling rate of 10 Hz, compared with 1-kHz ECG sampling. The heartbeat-interval error was also evaluated through heart rate variability (HRV) analysis: the mean absolute percentage error (MAPE) of the low-frequency/high-frequency (LF/HF) components obtained from the 10-Hz PPG decreased from 38.3% to 3.3%, which is small enough for practical HRV analysis.
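
    The core compensation idea can be sketched as follows. This is a rough illustration rather than the paper's algorithm (it omits the two-line approximation and FDPPG steps): a low-rate PPG signal is linearly interpolated onto a finer time grid, and the heartbeat interval is read off the autocorrelation peak within a plausible heart-rate range. The synthetic signal, rates, and search range below are assumptions.

```python
# Rough sketch: heartbeat-interval estimation from low-rate PPG via
# linear interpolation + autocorrelation. Not the paper's algorithm.
import numpy as np

fs_low, fs_interp = 10, 1000          # Hz: sensor rate and interpolation target
true_interval = 0.8                   # s, synthetic "ground-truth" beat period

# Synthetic PPG-like waveform sampled at the low rate (10 s of data).
t_low = np.arange(0, 10, 1 / fs_low)
ppg_low = (np.sin(2 * np.pi * t_low / true_interval)
           + 0.3 * np.sin(4 * np.pi * t_low / true_interval))

# Step 1: linear interpolation onto a fine grid to reduce sampling error.
t_fine = np.arange(0, t_low[-1], 1 / fs_interp)
ppg_fine = np.interp(t_fine, t_low, ppg_low)
ppg_fine -= ppg_fine.mean()

# Step 2: autocorrelation; heartbeats repeat, so the strongest peak within a
# plausible heart-rate range approximates the mean heartbeat interval.
acf = np.correlate(ppg_fine, ppg_fine, mode="full")[len(ppg_fine) - 1:]
lo, hi = int(0.3 * fs_interp), int(2.0 * fs_interp)   # 0.3-2.0 s (30-200 bpm)
best_lag = lo + int(np.argmax(acf[lo:hi]))
print(f"estimated interval: {best_lag / fs_interp * 1000:.1f} ms "
      f"(true: {true_interval * 1000:.1f} ms)")
```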

  • A Method to Detect Chorus Sections in Lyrics Text

    Kento WATANABE  Masataka GOTO  

     
    PAPER-Music Information Processing

      Publicized:
    2023/06/02
      Vol:
    E106-D No:9
      Page(s):
    1600-1609

    This paper addresses the novel task of detecting chorus sections in English and Japanese lyrics text. Although chorus-section detection using audio signals has been studied, whether chorus sections can be detected from text-only lyrics is an open issue. Another open issue is whether patterns of repeating lyric lines, such as those appearing in chorus sections, depend on language. To investigate these issues, we propose a neural-network-based model for sequence labeling that can learn phrase repetition and linguistic features to detect chorus sections in lyrics text. It is, however, difficult to train this model because no dataset of lyrics with chorus-section annotations existed, as there was no prior work on this task. We therefore generate a large amount of training data with such annotations by leveraging pairs of musical audio signals and their corresponding manually time-aligned lyrics: we first automatically detect chorus sections from the audio signals and then transfer their temporal positions to line-level chorus-section annotations for the lyrics. Experimental results show that the proposed model trained on the generated data contributes to detecting chorus sections, that the model trained on Japanese lyrics can detect chorus sections surprisingly well in English lyrics, and that patterns of repeating lyric lines are language-independent.
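
    As a rough illustration of chorus detection framed as sequence labeling over lyric lines, the sketch below tags each line as chorus or non-chorus using a small BiLSTM over a single hypothetical repetition feature (mean string similarity of a line to the song's other lines). The architecture, feature, and toy data are stand-in assumptions, not the paper's model, features, or training setup.

```python
# Toy sketch: chorus detection as per-line sequence labeling with a BiLSTM.
# The repetition feature and tiny synthetic "song" are illustrative only.
import torch
import torch.nn as nn
from difflib import SequenceMatcher

def line_features(lines):
    """One feature per line: mean string similarity to all other lines."""
    feats = []
    for i, a in enumerate(lines):
        sims = [SequenceMatcher(None, a, b).ratio()
                for j, b in enumerate(lines) if j != i]
        feats.append([sum(sims) / max(len(sims), 1)])
    return torch.tensor(feats, dtype=torch.float32).unsqueeze(0)  # (1, lines, 1)

class ChorusTagger(nn.Module):
    def __init__(self, in_dim=1, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)   # chorus vs. non-chorus logits

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h)                    # (1, lines, 2)

# Tiny synthetic "song": repeated lines play the role of a chorus.
lines = ["verse one line", "verse two line",
         "la la chorus line", "la la chorus line",
         "bridge something else",
         "la la chorus line", "la la chorus line"]
labels = torch.tensor([[0, 0, 1, 1, 0, 1, 1]])
feats = line_features(lines)

model = ChorusTagger()
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(200):                          # overfit the toy example
    logits = model(feats)
    loss = nn.functional.cross_entropy(logits.view(-1, 2), labels.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = model(feats).argmax(dim=-1)
print("predicted chorus mask:", pred.tolist()[0])
```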