
Keyword Search Result

[Keyword] rule-based speech synthesis (2 hits)

Hits 1-2 of 2
  • High Quality Speech Synthesis Based on the Reproduction of the Randomness in Speech Signals

    Naofumi AOKI  

     
    PAPER-Image & Signal Processing

      Vol:
    E84-A No:9
      Page(s):
    2198-2206

A high-quality speech synthesis technique based on wavelet subband analysis of speech signals was devised to enhance the naturalness of synthesized voiced consonants. The technique reproduces a characteristic of voiced consonant speech, namely that it exhibits a markedly unvoiced character in the high-frequency subbands. To mix this unvoiced component into voiced speech appropriately, a noise-inclusion procedure employing the discrete wavelet transform was proposed. The paper also describes a speech synthesizer that employs several random fractal techniques, used chiefly to enhance the naturalness of purely voiced synthesized speech. Three types of fluctuation were treated in the synthesizer: (1) pitch-period fluctuation, (2) amplitude fluctuation, and (3) waveform fluctuation. In addition, instead of a normal impulse train, a triangular pulse was used as a simple model of the glottal excitation pulse. To compensate for the degraded frequency characteristic of the triangular pulse, which falls off more steeply than the -6 dB/oct spectral slope required of the glottal excitation pulse, a random fractal interpolation technique was applied. Psychoacoustic experiments were carried out to evaluate the developed system, focusing in particular on how effectively the mixed-excitation scheme enhanced the naturalness of voiced consonant speech. Although the proposed techniques are only a slight modification of a conventional LPC (linear predictive coding) speech synthesizer, the subjective evaluation suggested that the system effectively recovers the naturalness that tends to be lost in the conventional LPC synthesis scheme.
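    The abstract does not give the fluctuation formulas, but the general idea of a triangular glottal pulse train with pitch-period (jitter) and amplitude (shimmer) fluctuations can be sketched as follows. All function names, pulse shapes, and parameter values here are illustrative assumptions, not taken from the paper.

    ```python
    import random

    def triangular_pulse(length, peak_pos=0.3):
        """A simple triangular glottal-pulse shape (peak position is an assumed parameter)."""
        peak = max(1, int(length * peak_pos))
        pulse = []
        for n in range(length):
            if n < peak:
                pulse.append(n / peak)                                  # rising slope
            else:
                pulse.append(max(0.0, 1.0 - (n - peak) / (length - peak)))  # falling slope
        return pulse

    def jittered_excitation(n_periods, base_period, jitter=0.02, shimmer=0.05, seed=0):
        """Excitation train with per-period pitch and amplitude fluctuations."""
        rng = random.Random(seed)
        out = []
        for _ in range(n_periods):
            # Fluctuate the pitch period (jitter) and the pulse amplitude (shimmer).
            period = int(round(base_period * (1.0 + rng.uniform(-jitter, jitter))))
            amp = 1.0 + rng.uniform(-shimmer, shimmer)
            out.extend(amp * s for s in triangular_pulse(period))
        return out
    ```

    In a full synthesizer this excitation would drive the vocal-tract filter; the paper additionally shapes the pulse spectrum with random fractal interpolation, which is not modeled here.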

  • Development of a Rule-Based Speech Synthesizer Module for Embedded Use

    Mikio YAMAGUCHI  John-Paul HOSOM  

     
    PAPER

      Vol:
    E76-A No:11
      Page(s):
    1990-1998

A module for rule-based Japanese speech synthesis has been developed. The synthesizer was constructed using the Multiple-Cascade Terminal Analog (MCTA) structure, which has been improved in three respects: the voicing-source model has more variable parameters, allowing voicing-source waveforms that better approximate natural speech; the spectral characteristics of the fricative source have been improved; and the path used for nasal consonants has more resonators, conforming better to theory. The current synthesis system uses a modified stored-pattern data structure that allows better transitions between syllables; however, time-invariant values are used in certain cases to reduce the amount of memory required. The system also has a new, consolidated method for generating geminate obstruents and syllabic nasals. The synthesizer and synthesis system have been implemented in a redeveloped rule-based speech-synthesis module. Built with ASIC technology, the module is small (56×36×8 mm) and light (19 g), so it can be embedded in various types of portable or mobile machinery. The module connects directly to a microprocessor bus and accepts as input sentences generated by the host computer, written in the Japanese katakana or romaji syllabaries together with symbols that describe the sentence structure. The syllable articulation rate is 65% for one hundred Japanese syllables (including palatalized sounds) and 74% for sixty-seven syllables (excluding palatalized sounds). Word intelligibility, measured using phonetically balanced words, is 88%.
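    The internals of the MCTA structure are not given in the abstract, but the general terminal-analog idea it builds on — passing a source signal through a cascade of second-order formant resonators — can be sketched as below. The coefficient formulas are the standard Klatt-style digital resonator; the formant values and sampling rate are illustrative assumptions, not the module's actual parameters.

    ```python
    import math

    def resonator_coeffs(f, bw, fs):
        """Second-order digital resonator (Klatt-style) for center frequency f and bandwidth bw."""
        r = math.exp(-math.pi * bw / fs)          # pole radius from bandwidth
        c = -r * r
        b = 2.0 * r * math.cos(2.0 * math.pi * f / fs)
        a = 1.0 - b - c                            # normalizes DC gain to 1
        return a, b, c

    def cascade_filter(x, formants, fs=10000):
        """Filter a source signal through one resonator per formant, in cascade."""
        y = list(x)
        for f, bw in formants:
            a, b, c = resonator_coeffs(f, bw, fs)
            out, y1, y2 = [], 0.0, 0.0
            for s in y:
                v = a * s + b * y1 + c * y2        # y[n] = a*x[n] + b*y[n-1] + c*y[n-2]
                out.append(v)
                y2, y1 = y1, v
            y = out
        return y
    ```

    Because each stage is normalized to unit gain at DC, the cascade's overall DC gain stays at 1 regardless of how many formants are chained; a multiple-cascade design such as MCTA adds parallel paths (e.g., the nasal branch mentioned above) on top of this basic scheme.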