
Author Search Result

[Author] Nobuaki MINEMATSU (10 hits)

Results 1-10 of 10
  • Speaker Verification in Realistic Noisy Environment in Forensic Science

    Toshiaki KAMADA  Nobuaki MINEMATSU  Takashi OSANAI  Hisanori MAKINAE  Masumi TANIMOTO  

     
PAPER-Speaker Verification
Vol: E91-D No:3  Page(s): 558-566

In forensic telephony speaker verification, we may be asked to identify a speaker in a far noisier environment than is assumed in general research. In such an environment, the speech is first processed to clarify it. However, a previous study of speaker verification on clarified speech did not yield satisfactory results. In this study, we conducted speaker verification experiments on clarified speech recorded in noisy environments and examined the relationship between improvements in acoustic quality and verification results. Moreover, experiments with realistic noise, such as a crime-prevention alarm and power-supply noise, were conducted, and speaker verification accuracy in a realistic environment was examined. We confirmed the validity of speaker verification with clarification of speech in a realistic noisy environment.
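The abstract does not specify how the speech is clarified; a common generic choice for stationary noise is spectral subtraction. The sketch below is only an illustration of that kind of pre-processing, not the paper's method; the frame size, signals, and noise estimate are all illustrative assumptions.

```python
import numpy as np

def spectral_subtract(noisy, noise, frame=256):
    """Subtract an average noise magnitude spectrum, frame by frame."""
    # Estimate the noise magnitude spectrum from a noise-only recording.
    noise_mag = np.abs(np.fft.rfft(noise.reshape(-1, frame), axis=1)).mean(axis=0)
    out = np.zeros(len(noisy) // frame * frame)
    for i in range(len(out) // frame):
        spec = np.fft.rfft(noisy[i * frame:(i + 1) * frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
        # Keep the noisy phase; resynthesize the frame from the reduced magnitude.
        out[i * frame:(i + 1) * frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out

rng = np.random.default_rng(0)
t = np.arange(2048) / 8000.0
clean = np.sin(2 * np.pi * 440.0 * t)      # a tonal stand-in for speech
noise = 0.5 * rng.standard_normal(2048)    # stationary background noise
enhanced = spectral_subtract(clean + noise, noise)

err_before = np.mean(noise ** 2)              # distortion of the noisy signal
err_after = np.mean((enhanced - clean) ** 2)  # distortion after "clarification"
```

With a stationary noise estimate, the enhanced signal is closer to the clean reference than the noisy input was, which is the precondition the paper then probes: whether such acoustic improvement also improves verification.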

  • Accent Sandhi Estimation of Tokyo Dialect of Japanese Using Conditional Random Fields Open Access

    Masayuki SUZUKI  Ryo KUROIWA  Keisuke INNAMI  Shumpei KOBAYASHI  Shinya SHIMIZU  Nobuaki MINEMATSU  Keikichi HIROSE  

     
INVITED PAPER
Publicized: 2016/12/08  Vol: E100-D No:4  Page(s): 655-661

When synthesizing speech from Japanese text, correct assignment of accent nuclei to input text with arbitrary content is indispensable for obtaining natural-sounding synthetic speech. A phenomenon called accent sandhi occurs in Japanese utterances: when a word is uttered in a sentence, its accent nucleus may change depending on the preceding and succeeding words. This paper describes a statistical method for automatically predicting the accent nucleus changes caused by accent sandhi. First, as the basis of the research, a database of Japanese text was constructed with labels of accent phrase boundaries and accent nucleus positions as uttered in sentences. A single native speaker of Tokyo-dialect Japanese annotated all the labels for 6,344 Japanese sentences. Then, a conditional-random-field-based method was developed using this database to predict accent phrase boundaries and accent nuclei. The proposed method predicted accent nucleus positions for accent phrases with 94.66% accuracy, clearly surpassing the 87.48% accuracy of our rule-based method. A listening experiment was also conducted comparing synthetic speech obtained with the proposed method and with the rule-based method. The results show that our method significantly improved the naturalness of the synthetic speech.
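At prediction time, a linear-chain CRF reduces to Viterbi decoding over per-position label scores. The toy sketch below decodes a four-mora sequence with hand-set scores; the label set (L = low, H = high, N = accent nucleus), the scores, and the sequence length are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

LABELS = ["L", "H", "N"]

def viterbi(emissions, transitions):
    """emissions: (T, K) per-mora scores; transitions: (K, K) pairwise scores."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j]: score of ending at label j having come from label i.
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):      # backtrace the best label sequence
        path.append(int(back[t][path[-1]]))
    return [LABELS[i] for i in reversed(path)]

# Toy scores: the nucleus (N) is favoured at the third mora, and the
# N -> H transition is penalized (pitch falls after the nucleus).
emissions = np.array([[1.0, 0.2, 0.0],
                      [0.1, 1.0, 0.3],
                      [0.0, 0.4, 1.5],
                      [1.0, 0.2, 0.0]])
transitions = np.array([[0.5, 0.5, 0.0],
                        [0.0, 0.5, 0.5],
                        [1.0, -2.0, -2.0]])
labels = viterbi(emissions, transitions)   # ['L', 'H', 'N', 'L']
```

The decoded rise-nucleus-fall shape mirrors the pitch pattern of a Tokyo-Japanese accent phrase; a trained CRF replaces the hand-set scores with learned feature weights.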

  • Development and Evaluation of Online Infrastructure to Aid Teaching and Learning of Japanese Prosody Open Access

    Nobuaki MINEMATSU  Ibuki NAKAMURA  Masayuki SUZUKI  Hiroko HIRANO  Chieko NAKAGAWA  Noriko NAKAMURA  Yukinori TAGAWA  Keikichi HIROSE  Hiroya HASHIMOTO  

     
INVITED PAPER
Publicized: 2016/12/22  Vol: E100-D No:4  Page(s): 662-669

This paper develops an online, freely available framework to aid teaching and learning of the prosodic control of Tokyo Japanese: how to generate adequate word accent and phrase intonation. The framework, called OJAD (Online Japanese Accent Dictionary) [1], provides three features. 1) Visual, auditory, systematic, and comprehensive illustration of the patterns of accent change (accent sandhi) of verbs and adjectives; only the changes caused by the twelve fundamental conjugations are covered. 2) Visual illustration of the accent pattern of a given verbal expression, i.e., a combination of a verb and its postpositional auxiliary words. 3) Visual illustration of the pitch pattern of any given sentence and the expected positions of accent nuclei in it. The third feature is implemented using an accent change prediction module that we developed for Japanese text-to-speech (TTS) synthesis [2],[3]. Experiments show that accent nucleus assignment to given texts by the proposed framework is much more accurate than that by native speakers. Subjective and objective assessments by teachers and learners show extremely high pedagogical effectiveness of the developed framework.

  • Automatic Estimation of Accentual Attribute Values of Words for Accent Sandhi Rules of Japanese Text-to-Speech Conversion

    Nobuaki MINEMATSU  Ryuji KITA  Keikichi HIROSE  

     
PAPER-Speech Synthesis and Prosody
Vol: E86-D No:3  Page(s): 550-557

Accurate estimation of the accentual attribute values of words, which is required to apply the rules of Japanese word accent sandhi to prosody generation, is an important factor in realizing high-quality text-to-speech (TTS) conversion. The rules were formulated by Sagisaka et al. and are widely used in Japanese TTS systems. Applying them, however, requires the values of a few accentual attributes for each constituent word of the input text. These attribute values cannot be found in any public database or Japanese accent dictionary, and they are difficult even for native speakers of Japanese to estimate through introspection alone. In this paper, an algorithm is proposed that estimates these values automatically from a large amount of data on the accent types of accentual phrases, collected through a long series of listening experiments. The proposed algorithm explicitly accounts for inter-speaker differences in knowledge of accent sandhi. To improve the coverage of the estimated values over the obtained data, the rules were tentatively modified. Evaluation experiments using two-mora accentual phrases showed the high validity of the estimated values and the modified rules, and also revealed some defects caused by the variety of linguistic expressions in Japanese.

  • Regularized Maximum Likelihood Linear Regression Adaptation for Computer-Assisted Language Learning Systems

    Dean LUO  Yu QIAO  Nobuaki MINEMATSU  Keikichi HIROSE  

     
PAPER-Educational Technology
Vol: E94-D No:2  Page(s): 308-316

This study focuses on speaker adaptation techniques for computer-assisted language learning (CALL). We first investigate the effects and problems of maximum likelihood linear regression (MLLR) speaker adaptation when it is used in pronunciation evaluation. Automatic scoring and error detection experiments are conducted on two publicly available databases of Japanese learners' English pronunciation. As expected, over-adaptation causes misjudgment of pronunciation accuracy. Following this analysis, we propose a novel method, regularized maximum likelihood linear regression (Regularized-MLLR) adaptation, to counter the adverse effects of MLLR adaptation. The method uses a group of teachers' data to regularize learners' transformation matrices so that erroneous pronunciations are not mistakenly transformed into correct ones. We implement this idea in two ways: one uses the average of the teachers' transformation matrices as a constraint on MLLR, and the other represents learners' transformations as linear combinations of the teachers' matrices. Experimental results show that the proposed methods make better use of MLLR adaptation and avoid over-adaptation.
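The first of the two implementations (the teachers' average as a constraint) can be sketched as a ridge-style regression pulled toward the mean teacher matrix: W = argmin ||Y - WX||² + λ||Y - W̄||², which has a closed form. The dimensions, data, and plain least-squares formulation below are toy stand-ins, not the paper's actual MLLR estimation over Gaussian statistics.

```python
import numpy as np

def regularized_transform(X, Y, W_bar, lam):
    """Closed form of argmin_W ||Y - W X||_F^2 + lam * ||W - W_bar||_F^2."""
    d = X.shape[0]
    return (Y @ X.T + lam * W_bar) @ np.linalg.inv(X @ X.T + lam * np.eye(d))

rng = np.random.default_rng(1)
d, n = 5, 50
X = rng.standard_normal((d, n))                    # model-side features
W_true = rng.standard_normal((d, d))               # the learner's "true" transform
Y = W_true @ X + 0.1 * rng.standard_normal((d, n))  # observed learner data

# "Teachers" whose transforms sit near the true one; their average regularizes.
W_bar = np.mean([W_true + 0.05 * rng.standard_normal((d, d))
                 for _ in range(3)], axis=0)

W_unreg = regularized_transform(X, Y, W_bar, lam=0.0)    # plain least squares
W_reg = regularized_transform(X, Y, W_bar, lam=1e6)      # huge lam -> W_bar
```

The λ knob interpolates between the unconstrained learner transform (which can over-adapt to erroneous pronunciations) and the teachers' average (which cannot).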

  • Separation of Mixed Audio Signals by Decomposing Hilbert Spectrum with Modified EMD

    Md. Khademul Islam MOLLA  Keikichi HIROSE  Nobuaki MINEMATSU  

     
PAPER-Speech/Audio Processing
Vol: E89-A No:3  Page(s): 727-734

The Hilbert transform, together with empirical mode decomposition (EMD), produces the Hilbert spectrum (HS), a fine-resolution time-frequency representation of any nonlinear and non-stationary signal. EMD decomposes the mixture signal into oscillatory components, each called an intrinsic mode function (IMF). Some modifications of the conventional EMD are proposed here. The instantaneous frequency of every real-valued IMF component is computed with the Hilbert transform, and the HS is constructed by arranging the instantaneous frequency spectra of the IMF components. The HS of the mixture signal is then decomposed into subspaces corresponding to the component sources. The decomposition is performed by applying independent component analysis (ICA) and Kullback-Leibler-divergence-based k-means clustering to a selected number of bases derived from the HS of the mixture. The time-domain source signals are assembled by applying some post-processing to the subspaces. Experimental results obtained with the proposed separation technique are presented.
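The core operation behind the Hilbert spectrum is computing the instantaneous frequency of each IMF from its analytic signal. A minimal numpy sketch, with a pure 50 Hz sinusoid standing in for one IMF (a real IMF from EMD would replace it); the FFT-based analytic-signal construction is the standard one.

```python
import numpy as np

def analytic_signal(x):
    """Return x + j*H(x) via the one-sided spectrum (even-length x assumed)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0   # keep DC and Nyquist once
    h[1:N // 2] = 2.0        # double the positive frequencies
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
imf = np.sin(2 * np.pi * 50.0 * t)                 # one "IMF"

phase = np.unwrap(np.angle(analytic_signal(imf)))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)    # Hz, one sample shorter
mid = inst_freq[100:-100]                          # ignore edge effects
```

Stacking these per-IMF instantaneous-frequency tracks over time (weighted by instantaneous amplitude) gives the time-frequency image that the paper then factorizes into source subspaces.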

  • Prosodic Analysis and Modeling of Nagauta Singing to Generate Prosodic Contours from Standard Scores

    Nobuaki MINEMATSU  Bungo MATSUOKA  Keikichi HIROSE  

     
PAPER
Vol: E87-D No:5  Page(s): 1093-1101

Nagauta (長唄) is one of the classical styles of Japanese singing. It has very original and unique prosodic patterns, in which abrupt and sharp changes of F0 are often observed at mora (Japanese speech unit) transitions; such an F0 change is sometimes found even within a single mora. In this paper, we propose a model that synthesizes this unique F0 pattern by treating the abrupt and sharp changes as grace notes. Nagauta's original scores contain no strict descriptions of tones and durations, so the baseline melody realized in a performance depends on the singer and is difficult to predict from the scores alone. In this paper, the baseline melody is explicitly given to a singer in standard notation, and the singer is asked to sing the song in Nagauta style. Taking the standard score as input, the proposed model simulates the F0 pattern generated by the singer under this condition. Further, this paper reports an interesting phenomenon concerning power movements at the sharp F0 changes: acoustic analysis of Nagauta singing samples reveals that the sharp increases of F0 and the sharp decreases of power are synchronized. Although the physiological mechanism of this phenomenon is not discussed here, another model is proposed to generate these unique power patterns. Evaluation experiments with young Japanese listeners indicate the high validity of the two proposed models.

  • Tone Recognition of Continuous Mandarin Speech Based on Tone Nucleus Model and Neural Network

    Xiao-Dong WANG  Keikichi HIROSE  Jin-Song ZHANG  Nobuaki MINEMATSU  

     
PAPER-Pattern Recognition
Vol: E91-D No:6  Page(s): 1748-1755

A method was developed for automatic recognition of syllable tone types in continuous Mandarin speech by integrating two techniques: tone nucleus modeling and a neural network classifier. Tone nucleus modeling considers a syllable F0 contour as consisting of three parts: onset course, tone nucleus, and offset course. The two courses are transitions from/to neighboring syllable F0 contours, while the tone nucleus is the intrinsic part of the contour. By viewing only the tone nucleus, acoustic features less affected by neighboring syllables are obtained. When tone nucleus modeling is used, automatic detection of the tone nucleus becomes crucial, and an improvement was added to the original detection method. Distinctive acoustic features for tone types are not limited to F0 contours; other prosodic features, such as waveform power and syllable duration, are also useful for tone recognition. Such heterogeneous features are rather difficult to handle simultaneously in hidden Markov models (HMMs) but easy in neural networks, so we adopted a multi-layer perceptron (MLP) as the neural network. Tone recognition experiments were conducted for speaker-dependent and speaker-independent cases. To show the effect of the integration, experiments were also conducted for two baselines: an HMM classifier with tone nucleus modeling, and an MLP classifier viewing the entire syllable instead of the tone nucleus. The integrated method achieved a tone recognition rate of 87.1% in the speaker-dependent case and 80.9% in the speaker-independent case, about 10% relative error reduction compared with the baselines.
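The reason heterogeneous features are easy for an MLP is that F0, power, and duration can simply be concatenated into one input vector; no per-stream emission model is needed as in an HMM. A forward-pass sketch with random (untrained) weights and toy feature values; the layer sizes and feature layout are illustrative assumptions, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Heterogeneous prosodic features of one syllable (toy values; a real
# system would normalize them before concatenation):
f0_nucleus = np.array([220.0, 228.0, 240.0])   # F0 samples in the tone nucleus (Hz)
log_power = np.array([-1.2, -0.9])             # frame log-powers
duration = np.array([0.18])                    # syllable duration (s)
x = np.concatenate([f0_nucleus, log_power, duration])  # one joint input vector

# One hidden layer, four outputs for the four Mandarin tones.
W1 = rng.standard_normal((8, x.size)); b1 = np.zeros(8)
W2 = rng.standard_normal((4, 8)); b2 = np.zeros(4)

h = np.tanh(W1 @ x + b1)
logits = W2 @ h + b2
probs = np.exp(logits - logits.max()); probs /= probs.sum()  # softmax
tone = int(probs.argmax()) + 1                               # tone 1..4
```

Training would fit W1, W2 by backpropagation on labeled syllables; the point here is only the joint treatment of the three feature types in a single input vector.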

  • Tensor Factor Analysis for Arbitrary Speaker Conversion

    Daisuke SAITO  Nobuaki MINEMATSU  Keikichi HIROSE  

     
PAPER-Speech and Hearing
Publicized: 2020/03/13  Vol: E103-D No:6  Page(s): 1395-1405

This paper describes a novel approach to flexible control of speaker characteristics using a tensor representation of multiple Gaussian mixture models (GMMs). In voice conversion studies, conversion from/to an arbitrary speaker's voice is one of the important objectives. For this purpose, eigenvoice conversion (EVC) based on an eigenvoice GMM (EV-GMM) was proposed. In EVC, a speaker space is constructed from GMM supervectors, high-dimensional vectors derived by concatenating the mean vectors of each speaker GMM, and each speaker is represented in this space by a small number of weights on eigen-supervectors. In this paper, we revisit the construction of the speaker space by introducing tensor factor analysis of the training data set. In our approach, each speaker is represented as a matrix whose rows and columns correspond to the dimensions of the mean vector and the Gaussian components, respectively, and the speaker space is derived by tensor factor analysis of the set of these matrices. Our approach solves an inherent problem of the supervector representation and improves the performance of voice conversion. In addition, the effects of speaker adaptive training before factorization are investigated. Experimental results of one-to-many voice conversion demonstrate the effectiveness of the proposed approach.
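The difference between the supervector view and the matrix view can be made concrete with shapes: S speaker matrices of size D x M (mean-vector dimension x Gaussian component) stack into a 3rd-order tensor, whose mode unfoldings yield shared bases for dimensions and components and a small core matrix per speaker. The Tucker-style factorization below is a generic sketch with toy sizes and random data, not the paper's actual factor analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
S, D, M = 6, 4, 5
tensor = rng.standard_normal((S, D, M))   # speakers x mean dims x components

# Supervector view flattens each speaker to a D*M vector, mixing the modes:
supervectors = tensor.reshape(S, D * M)

# Tensor view keeps the two modes separate and factorizes each unfolding:
dim_unfold = np.moveaxis(tensor, 1, 0).reshape(D, S * M)
comp_unfold = np.moveaxis(tensor, 2, 0).reshape(M, S * D)
A = np.linalg.svd(dim_unfold, full_matrices=False)[0][:, :2]    # (D, 2) basis
B = np.linalg.svd(comp_unfold, full_matrices=False)[0][:, :2]   # (M, 2) basis

# Each speaker is now a tiny core matrix: speaker_i ~= A @ W[i] @ B.T
W = np.stack([A.T @ tensor[i] @ B for i in range(S)])           # (S, 2, 2)
recon = np.stack([A @ W[i] @ B.T for i in range(S)])
```

Because A and B factor the dimension and component modes separately, the per-speaker parameters shrink from D*M supervector weights to a 2 x 2 core, which is the kind of compactness the matrix representation buys.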

  • Duration Modeling with Decreased Intra-Group Temporal Variation for HMM-Based Phoneme Recognition

    Nobuaki MINEMATSU  Keikichi HIROSE  

     
PAPER
Vol: E78-D No:6  Page(s): 654-661

A new clustering method is proposed to increase the effect of duration modeling on HMM-based phoneme recognition. A close observation of the temporal correspondences between a phoneme HMM with single-Gaussian output probabilities and its training data revealed two extreme cases: one in which a phoneme class contains several types of correspondences completely different from each other, and one with only a single type of correspondence. Although duration modeling is commonly used to incorporate temporal information into HMMs, good models could not be obtained in the former case. Further observation of phoneme HMMs with Gaussian-mixture output probabilities showed that some HMMs still had multiple temporal correspondences, though fewer phonemes were affected than with single-Gaussian modeling. An appropriate duration model cannot be obtained for these phoneme HMMs with conventional methods, in which the duration distribution of each HMM state is represented by a single distribution function. To cope with this problem, a new method was proposed based on clustering phoneme classes with multiple types of temporal correspondences into sub-classes. The clustering was conducted so as to reduce the variation of the temporal correspondences within each sub-class, and an HMM was then constructed for each sub-class. Using the proposed method, speaker-dependent recognition experiments were performed on phonemes segmented from isolated words. A few-percent increase in the recognition rate was realized, which was not obtained by another method based on duration modeling with a Gaussian mixture.
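The sub-class splitting idea can be sketched by summarizing each training token of a phoneme as a vector of its per-state dwell times and clustering those vectors, so that each sub-class keeps one consistent temporal correspondence. The synthetic data, the 3-state layout, and the plain k-means below are illustrative assumptions, not the paper's clustering criterion.

```python
import numpy as np

rng = np.random.default_rng(4)
# 40 tokens of one phoneme class with two distinct temporal correspondences:
pattern_a = np.array([8.0, 3.0, 2.0])   # dwells long in state 0
pattern_b = np.array([2.0, 3.0, 8.0])   # dwells long in state 2
tokens = np.vstack([pattern_a + 0.5 * rng.standard_normal((20, 3)),
                    pattern_b + 0.5 * rng.standard_normal((20, 3))])

def kmeans(X, k, iters=20):
    """Plain k-means on duration vectors (deterministic init for the demo)."""
    centers = X[[0, -1]][:k]            # one seed token from each end
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(tokens, 2)
# Each resulting sub-class would then get its own HMM and duration model.
```

On this well-separated toy data the two correspondence types fall cleanly into the two clusters, which is exactly the situation in which a single per-state duration distribution would have been a poor fit for the undivided class.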