
Author Search Result

[Author] Tatsuya HIRAHARA (3 hits)

  • Numerical Simulation of Air Flow through Glottis during Very Weak Whisper Sound Production

    Makoto OTANI  Tatsuya HIRAHARA  

     
    PAPER-Speech and Hearing

    Vol: E94-A No:9
    Page(s): 1779-1785

    A non-audible murmur (NAM), a very weak whisper produced without vocal fold vibration, has been studied in the development of silent-speech communication tools for people with functional speech disorders, as well as human-to-human/machine interfaces with inaudible voice input. A NAM can be detected using a specially designed microphone, called a NAM microphone, attached to the neck. However, the detected NAM signal has a low signal-to-noise ratio and severely suppressed high-frequency components. To improve NAM clarity, the mechanism of NAM production must be clarified. In this work, airflow through the glottis in the vocal tract was numerically simulated using computational fluid dynamics and vocal tract shape models obtained by magnetic resonance imaging (MRI) scans of whispered voice production at various strengths, i.e., strong, weak, and very weak. To produce a very weak whisper during the MRI scan, subjects were trained, just before scanning, to produce the very weak whispered voice, i.e., the NAM. The numerical results show that a weak vorticity flow occurs in the supraglottal region even during very weak whisper production; such vorticity flow provides aeroacoustic sources for very weak whispering, i.e., NAM, just as in ordinary whispering.
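    The vorticity examined in this abstract is a standard fluid-dynamics quantity. As an illustration only (the grid, solver, and flow field below are assumptions, not the authors' CFD setup), the following sketch computes 2D vorticity ω = ∂v/∂x − ∂u/∂y on a discretized velocity field with central differences:

    ```python
    import numpy as np

    def vorticity_2d(u, v, dx, dy):
        """2D vorticity w = dv/dx - du/dy via finite differences.
        u, v: velocity components on a uniform grid, shape (ny, nx)."""
        dv_dx = np.gradient(v, dx, axis=1)
        du_dy = np.gradient(u, dy, axis=0)
        return dv_dx - du_dy

    # Toy check: rigid-body rotation u = -y, v = x has constant vorticity 2.
    n = 64
    y, x = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
    omega = vorticity_2d(-y, x, dx=2 / (n - 1), dy=2 / (n - 1))
    print(np.allclose(omega, 2.0))  # True
    ```

    A real supraglottal simulation would of course solve the Navier–Stokes equations on the MRI-derived geometry; this snippet only shows how the vorticity field is extracted from a computed velocity field.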

  • Loss Function Considering Multiple Attributes of a Temporal Sequence for Feed-Forward Neural Networks

    Noriyuki MATSUNAGA  Yamato OHTANI  Tatsuya HIRAHARA  

     
    PAPER-Speech and Hearing

    Publicized: 2020/08/31
    Vol: E103-D No:12
    Page(s): 2659-2672

    Deep neural network (DNN)-based speech synthesis has become popular in recent years and is expected to soon be widely used in embedded devices and other environments with limited computing resources. The key goal of these systems in resource-constrained environments is to reduce the computational cost of generating speech parameter sequences while maintaining voice quality. However, reducing computational cost is challenging for the two primary conventional DNN-based methods used for modeling speech parameter sequences. In feed-forward neural networks (FFNNs) with maximum likelihood parameter generation (MLPG), the MLPG reconstructs the temporal structure of the speech parameter sequences that FFNNs ignore, but requires additional computation proportional to the sequence length. In recurrent neural networks, the recursive structure allows speech parameter sequences to be generated with temporal structure taken into account without MLPG, but increases the computational cost compared with FFNNs. We propose a new approach that lets DNNs acquire parameters capturing the temporal structure by backpropagating the errors of multiple attributes of the temporal sequence via the loss function. This method enables FFNNs to generate speech parameter sequences that account for their temporal structure without MLPG. We generated fundamental frequency and mel-cepstrum sequences with the proposed and conventional methods, then synthesized and subjectively evaluated speech from these sequences. The proposed method enables even FFNNs, which work on a frame-by-frame basis, to generate speech parameter sequences that account for temporal structure and are perceptually superior to those from the conventional methods.
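    The abstract does not spell out which sequence attributes enter the loss; a common choice in speech parameter modeling is static, delta, and delta-delta features. The sketch below is a hedged, minimal instantiation under that assumption, written in NumPy for clarity; in practice the same loss would be implemented in an autodiff framework so each attribute's error backpropagates into the FFNN:

    ```python
    import numpy as np

    def delta(seq):
        """First-order dynamic (delta) feature via a simple [1, -1] difference."""
        d = np.zeros_like(seq)
        d[1:] = seq[1:] - seq[:-1]
        return d

    def multi_attribute_loss(pred, target, weights=(1.0, 1.0, 1.0)):
        """Weighted sum of MSE terms over static, delta, and delta-delta
        attributes of a (T, dim) speech parameter sequence (assumed attributes,
        not necessarily the paper's exact formulation)."""
        loss, p, t = 0.0, pred, target
        for w in weights:
            loss += w * np.mean((p - t) ** 2)
            p, t = delta(p), delta(t)  # move to the next-order attribute
        return loss

    rng = np.random.default_rng(0)
    target = rng.standard_normal((100, 40))          # e.g. 100 frames of mel-cepstra
    print(multi_attribute_loss(target, target))      # 0.0 for a perfect prediction
    ```

    Because the delta terms penalize errors in frame-to-frame change, a frame-by-frame FFNN trained with such a loss is pushed toward temporally smooth output without any MLPG post-processing step.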

  • Auditory Artifacts due to Switching Head-Related Transfer Functions of a Dynamic Virtual Auditory Display

    Makoto OTANI  Tatsuya HIRAHARA  

     
    PAPER

    Vol: E91-A No:6
    Page(s): 1320-1328

    Auditory artifacts due to switching head-related transfer functions (HRTFs) are investigated using a software-implemented dynamic virtual auditory display (DVAD) developed by the authors. The DVAD responds to a listener's head rotation, tracked by a head-tracking device, by switching HRTFs to present a highly realistic 3D virtual auditory space. The DVAD runs on Windows XP and does not require a high-performance computer. The total system latency (TSL), i.e., the delay between head motion and the corresponding change in the ear input signal, is a significant factor for DVADs. The measured TSL of our DVAD is about 50 ms, which is sufficient for practical applications and localization experiments. Another concern is the auditory artifact in DVADs caused by switching HRTFs. Switching HRTFs introduces waveform discontinuities into the synthesized binaural signals, which can be perceived as click noises that degrade the quality of the presented sound image. A subjective test and an excitation pattern (EPN) analysis using an auditory filter were performed with various source signals and HRTF spatial resolutions. The results of the subjective test reveal that click-noise perception depends on the source signal and the HRTF spatial resolution. Furthermore, the EPN analysis reveals that switching HRTFs significantly distorts the EPNs at off-signal frequencies. Such distortions, however, are perceptually masked by broadband source signals, whereas they are not masked by narrowband source signals, making the click noise more detectable. A higher HRTF spatial resolution yields smaller distortions; however, depending on the source signal, perceivable click noises remain even at 0.5-degree spatial resolution, which is finer than the minimum audible angle (1 degree in front).
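    The waveform discontinuity described in this abstract is easy to reproduce. The sketch below hard-switches between two synthetic head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs) mid-signal and compares the resulting step against a short linear crossfade; the HRIRs here are toy stand-ins (a unit impulse and an attenuated delay), not measured data, and crossfading is shown only as a generic smoothing technique, not as the paper's method:

    ```python
    import numpy as np

    fs = 48_000
    t = np.arange(fs // 10) / fs
    src = np.sin(2 * np.pi * 500 * t)            # 500 Hz source tone

    hrir_a = np.zeros(64); hrir_a[0] = 1.0       # direction A: unit impulse
    hrir_b = np.zeros(64); hrir_b[8] = 0.6       # direction B: delayed, attenuated

    out_a = np.convolve(src, hrir_a)[: src.size]
    out_b = np.convolve(src, hrir_b)[: src.size]

    # Hard switch: splice the two renderings at the midpoint.
    switch = src.size // 2
    hard = np.concatenate([out_a[:switch], out_b[switch:]])
    jump = abs(hard[switch] - hard[switch - 1])  # step at the switch point

    # Short linear crossfade across ~256 samples removes the step.
    fade = np.clip((np.arange(src.size) - switch) / 256 + 0.5, 0.0, 1.0)
    smooth = (1 - fade) * out_a + fade * out_b
    print(jump > abs(smooth[switch] - smooth[switch - 1]))  # True
    ```

    The abrupt splice produces exactly the kind of broadband transient that the EPN analysis detects as distortion at off-signal frequencies, which is why it is audible as a click against narrowband sources.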