Tsubasa OCHIAI Shigeki MATSUDA Hideyuki WATANABE Xugang LU Chiori HORI Hisashi KAWAI Shigeru KATAGIRI
Among various training concepts for speaker adaptation, Speaker Adaptive Training (SAT) has been successfully applied to standard Hidden Markov Model (HMM) speech recognizers whose states are associated with Gaussian Mixture Models (GMMs). On the other hand, motivated by the high discriminative power of Deep Neural Networks (DNNs), a new type of speech recognizer structure that combines DNNs and HMMs has been vigorously investigated in speaker adaptation research. Along these two lines, it is natural to seek further improvement of a DNN-HMM recognizer by employing the training concept of SAT. In this paper, we propose a novel speaker adaptation scheme that applies SAT to a DNN-HMM recognizer. Our SAT scheme allocates a Speaker Dependent (SD) module to one of the intermediate layers of the DNN, treats the remaining layers as a Speaker Independent (SI) module, and jointly trains the SD and SI modules while switching the SD module in a speaker-by-speaker manner. We implement the scheme using a DNN-HMM recognizer whose DNN has seven layers, and evaluate its utility on TED Talks corpus data. Our experimental results show that in the supervised adaptation scenario, our Speaker-Adapted (SA) SAT-based recognizer reduces the word error rate of the baseline SI recognizer and the lowest word error rate of the SA SI recognizer by 8.4% and 0.7%, respectively, and by 6.4% and 0.6% in the unsupervised adaptation scenario. Statistical testing confirmed that the error reductions gained by our SA SAT-based recognizers are significant. The results also show that our SAT-based adaptation outperforms its SI-based counterpart regardless of the SD module layer selection, and that the inner layers of the DNN appear more suitable for SD module allocation than the outer layers.
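The structural idea can be sketched compactly. The following is a minimal PyTorch sketch, not the authors' implementation: the class and attribute names (SATDNN, lower, sd, upper), the layer sizes, and the position of the SD layer are illustrative assumptions; only the overall structure (shared SI layers, one SD layer switched per speaker and trained jointly with them) follows the abstract.

```python
# Minimal sketch of a SAT-DNN acoustic model: one intermediate layer is
# Speaker Dependent (SD) and switched per speaker; all other layers are
# Speaker Independent (SI) and shared. Sizes here are illustrative, not
# the seven-layer configuration reported in the paper.
import torch
import torch.nn as nn

class SATDNN(nn.Module):
    def __init__(self, n_speakers, in_dim=440, hid=512, n_states=3000):
        super().__init__()
        # SI layers below the SD module (shared by all speakers)
        self.lower = nn.Sequential(
            nn.Linear(in_dim, hid), nn.Sigmoid(),
            nn.Linear(hid, hid), nn.Sigmoid(),
        )
        # one SD copy of the chosen intermediate layer per training speaker
        self.sd = nn.ModuleList(
            [nn.Linear(hid, hid) for _ in range(n_speakers)]
        )
        # SI layers above the SD module, ending in HMM-state logits
        self.upper = nn.Sequential(
            nn.Linear(hid, hid), nn.Sigmoid(),
            nn.Linear(hid, n_states),
        )

    def forward(self, x, speaker_id):
        h = self.lower(x)
        # switch the SD module speaker-by-speaker during joint training
        h = torch.sigmoid(self.sd[speaker_id](h))
        return self.upper(h)  # softmax is applied inside the training loss
```

At adaptation time, one would freeze lower and upper (the SI module) and train only a freshly initialized SD layer on the target speaker's data, using reference transcripts in the supervised scenario or first-pass decoding hypotheses in the unsupervised one.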
Hideyuki WATANABE Shigeru KATAGIRI
In general pattern recognition, a pattern to be recognized is first represented by a set of features, and the measured values of those features are then classified. Finding features relevant to recognition is thus an important issue in recognizer design. As a fundamental design framework that systematically enables one to realize such useful features, the Subspace Method (SM) has been extensively used in various recognition tasks. However, this promising methodological framework still has shortcomings: the discriminative power of its early versions was not very high, and although a recent discriminative version called the Learning Subspace Method improves discriminative power, its training behavior has not been fully clarified due to its empirical definition. To address these shortcomings, we propose in this paper a new discriminative SM algorithm based on the Minimum Classification Error/Generalized Probabilistic Descent (MCE/GPD) method and show that the proposed algorithm achieves optimal recognition accuracy, i.e., an (at least locally) minimum recognition error situation, in the probabilistic descent sense.
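For reference, the MCE/GPD machinery as it is typically instantiated for the Subspace Method can be summarized as follows; the notation (g_j, d_k, ell_k, eta, xi, epsilon_t) follows the standard MCE literature and is not necessarily this paper's exact formulation. Each class j is represented by an orthonormal basis of its subspace, and the discriminant is the squared projection onto that subspace.

```latex
% Typical MCE/GPD quantities instantiated for the Subspace Method
% (standard-literature notation, assumed rather than quoted from the paper).
\[
  g_j(\mathbf{x};\Lambda)
  = \sum_{i=1}^{r_j} \bigl(\mathbf{x}^{\top}\boldsymbol{\varphi}_{ji}\bigr)^2
  \qquad \text{(squared projection onto the class-$j$ subspace)}
\]
\[
  d_k(\mathbf{x};\Lambda)
  = -\,g_k(\mathbf{x};\Lambda)
    + \Bigl[\tfrac{1}{K-1}\sum_{j\neq k} g_j(\mathbf{x};\Lambda)^{\eta}\Bigr]^{1/\eta},
  \qquad
  \ell_k(\mathbf{x};\Lambda) = \frac{1}{1+e^{-\xi\, d_k(\mathbf{x};\Lambda)}}
\]
\[
  \Lambda^{(t+1)} = \Lambda^{(t)}
    - \varepsilon_t \,\nabla_{\Lambda}\,\ell_k\bigl(\mathbf{x}_t;\Lambda^{(t)}\bigr),
  \qquad
  \varepsilon_t > 0,\quad \sum_t \varepsilon_t = \infty,\quad \sum_t \varepsilon_t^2 < \infty
\]
```

Here d_k is the misclassification measure (positive when sample x of class k is misrecognized), ell_k is its smooth sigmoidal loss, and the step sizes epsilon_t satisfy the usual probabilistic descent conditions under which convergence to an (at least local) minimum of the expected loss holds.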