
Author Search Result

[Author] Hong Kook KIM (13 hits)

  • Procedural Constraints in the Extended RBAC and the Coloured Petri Net Modeling

    Wook SHIN  Jeong-Gun LEE  Hong Kook KIM  Kouichi SAKURAI  

     
    LETTER

    Vol: E88-A No:1  Page(s): 327-330

    This paper presents Coloured Petri Net modeling for the security analysis of Extended Role-Based Access Control systems.

  • A MFCC-Based CELP Speech Coder for Server-Based Speech Recognition in Network Environments

    Jae Sam YOON  Gil Ho LEE  Hong Kook KIM  

     
    PAPER-Speech/Audio Processing

    Vol: E90-A No:3  Page(s): 626-632

    Existing standard speech coders can provide high-quality speech communication. However, they tend to degrade the performance of automatic speech recognition (ASR) systems that use the reconstructed speech. The main cause of the degradation is that the linear predictive coefficients (LPCs), the typical spectral envelope parameters in speech coding, are optimized for speech quality rather than for speech recognition performance. In this paper, we propose a speech coder that uses mel-frequency cepstral coefficients (MFCCs) instead of LPCs to improve the performance of a server-based speech recognition system in network environments. To develop the proposed speech coder at a low bit rate, we first exploit the interframe correlation of MFCCs, which leads to predictive quantization of the MFCCs. Second, a safety-net scheme is proposed to make the MFCC-based speech coder robust to channel errors. As a result, we propose an 8.7 kbps MFCC-based CELP coder. It is shown that the proposed speech coder has speech quality comparable to 8 kbps G.729, and the ASR system using the proposed speech coder achieves a relative word error rate reduction of 6.8% compared to the ASR system using G.729 on a large vocabulary task (AURORA4).
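
    A minimal sketch of the safety-net predictive quantization idea described above (the codebooks, the first-order predictor coefficient, and the per-frame selection rule are assumptions for illustration, not the coder's actual design):

```python
import numpy as np

def vq(vec, codebook):
    """Nearest-neighbor vector quantization: return (index, reconstruction)."""
    idx = int(np.argmin(np.sum((codebook - vec) ** 2, axis=1)))
    return idx, codebook[idx]

def encode_mfcc(mfccs, pred_cb, safety_cb, rho=0.7):
    """Predictive MFCC quantization with a per-frame safety-net decision.

    mfccs: (num_frames, dim) array of MFCC vectors.
    rho:   assumed first-order inter-frame prediction coefficient.
    """
    prev_q = np.zeros(mfccs.shape[1])          # decoder state
    bitstream = []
    for x in mfccs:
        # Predictive path: quantize the prediction residual.
        i_p, r_hat = vq(x - rho * prev_q, pred_cb)
        x_p = rho * prev_q + r_hat
        # Safety-net path: memoryless quantization of the frame itself.
        i_s, x_s = vq(x, safety_cb)
        # Keep the closer reconstruction; the mode flag lets the decoder
        # resynchronize after channel errors (no inter-frame memory in 'S').
        if np.sum((x - x_p) ** 2) <= np.sum((x - x_s) ** 2):
            bitstream.append(('P', i_p)); prev_q = x_p
        else:
            bitstream.append(('S', i_s)); prev_q = x_s
    return bitstream
```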

  • Dynamic Cepstral Representations Based on Order-Dependent Windowing Methods

    Hong Kook KIM  Seung Ho CHOI  Hwang Soo LEE  

     
    PAPER-Speech Processing and Acoustics

    Vol: E81-D No:5  Page(s): 434-440

    In this paper, we propose dynamic cepstral representations that effectively capture the temporal information of cepstral coefficients. The number of speech frames used in the regression analysis that extracts a dynamic cepstral coefficient is made inversely proportional to the cepstral order, since higher-order cepstral coefficients fluctuate more than lower-order ones. By exploiting the relationship between the window length used to extract a dynamic cepstral coefficient and the statistical variance of the cepstral coefficient, we propose three windowing methods in this work: an utterance-specific variance-ratio windowing method, a statistical variance-ratio windowing method, and an inverse-lifter windowing method. Intra-speaker, inter-speaker, and speaker-independent recognition tests on 100 phonetically balanced words are carried out to evaluate the performance of the proposed order-dependent windowing methods.
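
    A minimal sketch of an order-dependent regression window for delta cepstra (the linear window-length schedule below is an assumption; the paper's variance-ratio and inverse-lifter rules are not reproduced here):

```python
import numpy as np

def order_dependent_deltas(ceps, max_win=4, min_win=1):
    """Delta cepstra computed with an order-dependent regression window.

    ceps: (num_frames, num_coeffs) static cepstral coefficients.
    The window half-length falls from max_win for the lowest order to
    min_win for the highest order (an assumed linear schedule).
    """
    T, K = ceps.shape
    deltas = np.zeros_like(ceps, dtype=float)
    for k in range(K):
        M = max(min_win,
                int(round(max_win - (max_win - min_win) * k / max(K - 1, 1))))
        denom = 2.0 * sum(m * m for m in range(1, M + 1))
        for t in range(T):
            acc = 0.0
            for m in range(1, M + 1):
                # Clamp indices at the utterance boundaries.
                acc += m * (ceps[min(t + m, T - 1), k] - ceps[max(t - m, 0), k])
            deltas[t, k] = acc / denom
    return deltas
```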

  • A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    Yoo Rhee OH  Hong Kook KIM  

     
    PAPER-Adaptation

    Vol: E93-D No:9  Page(s): 2379-2387

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech, in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as the state-tying level hybrid method; for the acoustic model adaptation, however, the triphone acoustic models are re-estimated based on the adapted pronunciation models, and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. Korean-spoken English speech recognition experiments show that ASR systems employing the state-tying and triphone-modeling level adaptation methods reduce the average word error rate (WER) for non-native speech by a relative 17.1% and 22.1%, respectively, compared to a baseline ASR system.
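
    A minimal sketch of the rule-classification and lexicon-expansion step (the rule format, the frequency threshold, and the example entries are hypothetical; the paper's data-driven classification is not reproduced here):

```python
# (phone in the canonical pronunciation, phone produced by non-native
#  speakers, relative frequency of the variant) -- hypothetical examples
variant_rules = [('r', 'l', 0.42), ('th', 's', 0.31), ('ae', 'eh', 0.12)]

FREQ_THRESHOLD = 0.3   # assumed split point

# Frequent rules are treated as pronunciation variants (added to the
# dictionary); the rest are treated as acoustic variants (used later when
# clustering or re-estimating triphone states).
pron_variants = [r for r in variant_rules if r[2] >= FREQ_THRESHOLD]
acoustic_variants = [r for r in variant_rules if r[2] < FREQ_THRESHOLD]

def expand_lexicon(lexicon, rules):
    """Add variant pronunciations by applying single-phone substitution rules."""
    expanded = {w: list(prons) for w, prons in lexicon.items()}
    for word, prons in lexicon.items():
        for pron in prons:
            for src, dst, _ in rules:
                if src in pron:
                    variant = tuple(dst if p == src else p for p in pron)
                    if variant not in expanded[word]:
                        expanded[word].append(variant)
    return expanded

lexicon = {'rice': [('r', 'ay', 's')]}
print(expand_lexicon(lexicon, pron_variants))   # adds ('l', 'ay', 's')
```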

  • Spectral Peak-Weighted Liftering of Cepstral Coefficients for Speech Recognition

    Hong Kook KIM  Hwang Soo LEE  

     
    PAPER-Speech and Hearing

    Vol: E83-D No:7  Page(s): 1540-1549

    In this paper, we propose a peak-weighted cepstral lifter (PWL) for enhancing the spectral peaks of an all-pole model spectrum in the cepstral domain. The design parameter of the PWL is the degree of pole enhancement, i.e., pole shifting toward the unit circle. The optimal pole shifting factor is chosen by considering the sensitivity to spectral resonance peaks, the variability of cepstral variances, and the recognition accuracy. Next, we generalize the PWL so that the optimal shifting factor is adaptively determined on a frame-by-frame basis. Compared with other cepstral lifters, a speech recognizer employing the frame-adaptive PWL provides better recognition performance.
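
    A minimal sketch of pole enhancement as an exponential lifter (the value of the shifting factor is an assumption, and the frame-adaptive selection of the factor is not reproduced here):

```python
import numpy as np

def pole_enhancing_lifter(ceps, rho=1.05):
    """Sharpen the spectral peaks of an all-pole model in the cepstral domain.

    For an all-pole model, c_n = sum_i p_i**n / n (n >= 1), so scaling every
    pole p_i by rho multiplies c_n by rho**n; rho > 1 moves the poles toward
    the unit circle and enhances the formant peaks.  rho must stay small
    enough that no pole crosses the unit circle.

    ceps: (num_frames, num_coeffs) LPC-derived cepstra, c_1 ... c_K per row.
    """
    orders = np.arange(1, ceps.shape[1] + 1)
    return ceps * rho ** orders
```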

  • Reducing Speech Noise for Patients with Dysarthria in Noisy Environments

    Woo Kyeong SEONG  Ji Hun PARK  Hong Kook KIM  

     
    PAPER-Speech and Hearing

    Vol: E97-D No:11  Page(s): 2881-2887

    Dysarthric speech results from damage to the central nervous system that affects the articulators, and it is mainly characterized by poor articulation due to irregular sub-glottal pressure, loudness bursts, phoneme elongation, and unexpected pauses during utterances. Since dysarthric speakers have physical disabilities caused by the impairment of their nervous system, they cannot easily control electronic devices. For this reason, automatic speech recognition (ASR) can be a convenient interface for dysarthric speakers to control electronic devices. However, the performance of dysarthric ASR degrades severely when there is background noise. Thus, in this paper, we propose a noise reduction method that improves the performance of dysarthric ASR. The proposed method selectively applies either a Wiener filtering algorithm or a Kalman filtering algorithm according to the result of voiced/unvoiced classification. The performance of the proposed method is then compared to that of a conventional Wiener filtering method in terms of ASR accuracy.
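
    A minimal sketch of the frame-wise filter selection (the voiced/unvoiced classifier, the scalar Kalman model, and the assignment of the Kalman filter to voiced frames and the Wiener filter to unvoiced frames are assumptions for illustration):

```python
import numpy as np
from scipy.signal import wiener

def is_voiced(frame, energy_thresh=0.01, zcr_thresh=0.2):
    """Crude energy/zero-crossing voiced-unvoiced decision (a placeholder
    for the classifier used in the paper)."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy > energy_thresh and zcr < zcr_thresh

def kalman_denoise(frame, q=1e-4, r=1e-2):
    """Scalar random-walk Kalman filter, a stand-in for the (likely
    AR-model-based) Kalman filtering stage of the paper."""
    x_hat, p = 0.0, 1.0
    out = np.empty_like(frame)
    for i, z in enumerate(frame):
        p += q                        # predict
        k = p / (p + r)               # Kalman gain
        x_hat += k * (z - x_hat)      # update
        p *= 1.0 - k
        out[i] = x_hat
    return out

def reduce_noise(signal, frame_len=320):
    """Frame-wise selection between Kalman (voiced) and Wiener (unvoiced)
    filtering; which filter serves which class is an assumption here."""
    out = np.asarray(signal, dtype=float).copy()
    for start in range(0, len(out) - frame_len + 1, frame_len):
        frame = out[start:start + frame_len]
        if is_voiced(frame):
            out[start:start + frame_len] = kalman_denoise(frame)
        else:
            out[start:start + frame_len] = wiener(frame, mysize=29)
    return out
```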

  • Compensation of Speech Coding Distortion for Wireless Speech Recognition

    Hong Kook KIM  

     
    LETTER-Speech and Hearing

    Vol: E87-D No:6  Page(s): 1596-1600

    In this paper, we perform experiments showing that the quantization noise caused by low-bit-rate speech coding can be characterized as a white noise process. The signal-to-quantization-noise ratio of the decoded speech for a given bit rate is then estimated by finding the level of artificially added white Gaussian noise whose perceptual speech quality matches that of the decoded speech. This information is incorporated into the parameter tuning of a noise-robust compensation algorithm for speech recognition, so that the compensation algorithm performs better over the range of estimated SNRs. Finally, we apply the compensation algorithm to a connected digit string recognition system that uses speech signals decoded by the GSM adaptive multi-rate (AMR) speech coder. It is shown that the noise-robust compensation algorithm reduces word error rates by 15% or more at the low-bit-rate modes of the AMR speech coder.
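
    A minimal sketch of estimating the equivalent SNR by matching perceptual quality against artificially added white Gaussian noise (quality_fn is a placeholder for a perceptual measure such as a PESQ implementation and is not defined here; the candidate SNR grid is an assumption):

```python
import numpy as np

def add_white_noise(clean, snr_db, rng):
    """Add white Gaussian noise to clean speech at a target SNR (in dB)."""
    noise = rng.standard_normal(len(clean))
    scale = np.sqrt(np.mean(clean ** 2) /
                    (np.mean(noise ** 2) * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

def estimate_equivalent_snr(clean, decoded, quality_fn,
                            candidate_snrs=range(5, 41, 5)):
    """Return the white-noise SNR whose perceptual quality is closest to
    that of the decoded (coded) speech.

    quality_fn(reference, degraded) is a placeholder for a perceptual
    quality measure; it is not defined here.
    """
    rng = np.random.default_rng(0)
    target = quality_fn(clean, decoded)
    best_snr, best_gap = None, float('inf')
    for snr in candidate_snrs:
        noisy = add_white_noise(clean, snr, rng)
        gap = abs(quality_fn(clean, noisy) - target)
        if gap < best_gap:
            best_snr, best_gap = snr, gap
    return best_snr
```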

  • Harmonic Model Based Excitation Enhancement for Low-Bit-Rate Speech Coding

    Hong Kook KIM  Mi Suk LEE  Chul Hong KWON  

     
    LETTER-Speech and Hearing

    Vol: E87-D No:7  Page(s): 1974-1977

    A new excitation enhancement technique based on a harmonic model is proposed in this paper to improve the speech quality of low-bit-rate speech coders. The technique is employed only in the decoding process and enhances the high-frequency components of the excitation. We develop procedures for harmonic model parameter estimation and harmonic generation, and apply the technique to a current state-of-the-art low-bit-rate speech coder. Experiments on spectrum reading and spectral distortion measurement show that the proposed excitation enhancement technique improves speech quality.
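
    A minimal sketch of regenerating high-band excitation from harmonic model parameters (the amplitude source, the random phases, and the band edge are assumptions; the paper's parameter estimation is not reproduced here):

```python
import numpy as np

def generate_high_band_harmonics(f0, amplitudes, fs=8000, frame_len=160,
                                 band_edge=3000.0):
    """Synthesize high-band excitation as a sum of harmonics of f0.

    amplitudes[k-1] is the (estimated) magnitude of the k-th harmonic; the
    amplitude/phase estimation of the paper is not reproduced, so random
    phases are assumed here.
    """
    t = np.arange(frame_len) / fs
    rng = np.random.default_rng(0)
    excitation = np.zeros(frame_len)
    for k, a in enumerate(amplitudes, start=1):
        freq = k * f0
        if band_edge <= freq < fs / 2:     # fill only the high band
            phase = rng.uniform(0.0, 2.0 * np.pi)
            excitation += a * np.cos(2.0 * np.pi * freq * t + phase)
    return excitation
```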

  • Phonetically Balanced Text Corpus Design Using a Similarity Measure for a Stereo Super-Wideband Speech Database

    Yoo Rhee OH  Yong Guk KIM  Mina KIM  Hong Kook KIM  Mi Suk LEE  Hyun Joo BAE  

     
    PAPER-Speech and Hearing

    Vol: E94-D No:7  Page(s): 1459-1466

    In this paper, we propose a text corpus design method for a Korean stereo super-wideband speech database. Since only a small text corpus is generally required for speech coding, the corpus should be designed to reflect the pronunciation behavior of natural conversation in order to ensure efficient speech quality tests. To this end, the proposed design method uses a similarity measure between the phoneme distribution of natural conversation and that of the designed text corpus. We first collect and refine text data from textbooks and websites. Next, a corpus is designed from the refined text data based on the similarity measure comparing phoneme distributions. We then construct a Korean stereo super-wideband speech (K-SW) database using the designed text corpus, where the recording environment is set to meet the conditions defined by ITU-T. Finally, the subjective quality of the K-SW database is evaluated using an ITU-T super-wideband codec in order to demonstrate that the K-SW database is useful for developing and evaluating super-wideband codecs.
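
    A minimal sketch of the similarity-based selection idea (phonemize() is a placeholder for a grapheme-to-phoneme converter, and the cosine similarity and greedy loop are assumptions; the paper's exact measure and search are not reproduced here):

```python
import numpy as np
from collections import Counter

def phoneme_distribution(sentences, phonemize):
    """Relative phoneme frequencies over a list of sentences; phonemize()
    is a placeholder for a grapheme-to-phoneme converter."""
    counts = Counter()
    for s in sentences:
        counts.update(phonemize(s))
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def similarity(dist_a, dist_b):
    """Cosine similarity between two phoneme distributions (an assumed
    measure; the paper's exact measure is not reproduced)."""
    phones = sorted(set(dist_a) | set(dist_b))
    a = np.array([dist_a.get(p, 0.0) for p in phones])
    b = np.array([dist_b.get(p, 0.0) for p in phones])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def design_corpus(candidates, target_dist, phonemize, size=500):
    """Greedily pick sentences that keep the corpus phoneme distribution
    closest to the target (natural-conversation) distribution."""
    corpus = []
    for _ in range(size):
        if not candidates:
            break
        best = max(candidates,
                   key=lambda s: similarity(
                       phoneme_distribution(corpus + [s], phonemize),
                       target_dist))
        corpus.append(best)
        candidates = [s for s in candidates if s is not best]
    return corpus
```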

  • HMM-Based Mask Estimation for a Speech Recognition Front-End Using Computational Auditory Scene Analysis

    Ji Hun PARK  Jae Sam YOON  Hong Kook KIM  

     
    LETTER-Speech and Hearing

    Vol: E91-D No:9  Page(s): 2360-2364

    In this paper, we propose a new mask estimation method for the computational auditory scene analysis (CASA) of speech using two microphones. The proposed method is based on a hidden Markov model (HMM) in order to incorporate the observation that mask information is correlated over contiguous analysis frames. In other words, an HMM is used to estimate the mask information, represented by the interaural time difference (ITD) and the interaural level difference (ILD) of the two channel signals, and the estimated mask information is then employed to separate the desired speech from noisy speech. To show the effectiveness of the proposed mask estimation, we compare the performance of the proposed method with that of a Gaussian kernel-based estimation method in terms of speech recognition performance. As a result, the proposed HMM-based mask estimation method provides an average word error rate reduction of 61.4% compared with the Gaussian kernel-based mask estimation method.
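
    A minimal sketch of computing the ITD and ILD cues from the two microphone signals (full-band, frame-level cues are used here for brevity, whereas a CASA front-end computes them per frequency channel; the HMM that converts the cues into a mask is not reproduced):

```python
import numpy as np

def itd_ild_features(left, right, fs, frame_len=512, hop=256):
    """Per-frame ITD (cross-correlation lag) and ILD (level ratio in dB)
    from a two-microphone recording; only the cues are computed here."""
    feats = []
    for start in range(0, len(left) - frame_len + 1, hop):
        l = left[start:start + frame_len]
        r = right[start:start + frame_len]
        xcorr = np.correlate(l, r, mode='full')
        lag = int(np.argmax(xcorr)) - (frame_len - 1)    # in samples
        itd = lag / fs                                    # in seconds
        ild = 10.0 * np.log10((np.sum(l ** 2) + 1e-12) /
                              (np.sum(r ** 2) + 1e-12))   # in dB
        feats.append((itd, ild))
    return feats
```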

  • A Statistical Approach to Error Compensation in Spectral Quantization

    Seung Ho CHOI  Hong Kook KIM  

     
    LETTER-Speech and Hearing

    Vol: E90-D No:9  Page(s): 1460-1464

    In this paper, we propose a statistical approach to improving the performance of spectral quantization in speech coders. The proposed techniques compensate for the distortion in a decoded line spectrum pair (LSP) vector based on a statistical mapping function between the decoded LSP vector and its corresponding original LSP vector. We first develop two codebook-based probabilistic matching (CBPM) methods by investigating the distribution of LSP vectors. In addition, we propose an iterative procedure for the two CBPMs. Next, the proposed techniques are applied to the predictive vector quantizer (PVQ) used in the IS-641 speech coder. The experimental results show that the proposed techniques reduce the average spectral distortion by around 0.064 dB and lower the percentage of outliers compared with the PVQ without compensation, resulting in transparent spectral quantization quality. Finally, speech quality is compared using the perceptual evaluation of speech quality (PESQ) measure, and it is shown that the IS-641 speech coder employing the proposed techniques produces better decoded speech quality than the standard IS-641 speech coder.
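
    A minimal sketch of a codebook-based correction of decoded LSP vectors (a simplified deterministic per-cell mean correction rather than the paper's probabilistic matching; the codebook and training data are assumed to be available):

```python
import numpy as np

def train_correction(decoded, original, codebook):
    """Per-cell correction: mean difference between original and decoded LSP
    vectors assigned to each codebook cell (a simplified, deterministic
    stand-in for codebook-based probabilistic matching)."""
    cells = np.argmin(((decoded[:, None, :] - codebook[None, :, :]) ** 2).sum(-1),
                      axis=1)
    corrections = np.zeros_like(codebook)
    for j in range(len(codebook)):
        in_cell = cells == j
        if np.any(in_cell):
            corrections[j] = np.mean(original[in_cell] - decoded[in_cell], axis=0)
    return corrections

def compensate(decoded_vec, codebook, corrections):
    """At the decoder, add the correction of the nearest codebook cell and
    re-sort to preserve the ordering property of LSP coefficients."""
    j = int(np.argmin(np.sum((codebook - decoded_vec) ** 2, axis=1)))
    return np.sort(decoded_vec + corrections[j])
```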

  • Bandwidth-Scalable Stereo Audio Coding Based on a Layered Structure

    Young Han LEE  Deok Su KIM  Hong Kook KIM  Jongmo SUNG  Mi Suk LEE  Hyun Joo BAE  

     
    LETTER-Speech and Hearing

    Vol: E92-D No:12  Page(s): 2540-2544

    In this paper, we propose a bandwidth-scalable stereo audio coding method based on a layered structure. The proposed stereo coding method encodes super-wideband (SWB) stereo signals and is able to decode either wideband (WB) stereo signals or SWB stereo signals, depending on the network congestion. The performance of the proposed stereo coding method is then compared with that of a conventional stereo coding method that decodes WB or SWB stereo signals separately, in terms of subjective quality, algorithmic delay, and computational complexity. Experimental results show that when stereo audio signals sampled at a rate of 32 kHz are compressed to 64 kbit/s, the proposed method provides significantly better audio quality than the conventional method, with a 64-sample shorter algorithmic delay and comparable computational complexity.
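
    A minimal sketch of the layered decoding idea (the layer names and decoder functions are placeholders, not the codec's actual API):

```python
# Layer names and the decoder functions below are placeholders, not the
# codec's actual API.

def decode(frame, decode_core_wb, decode_swb_extension):
    """frame: dict with a mandatory 'core' (WB stereo) layer and an optional
    'swb' extension layer that may be dropped under network congestion."""
    wb_stereo = decode_core_wb(frame['core'])            # always decodable
    if 'swb' in frame:                                    # extension received
        return decode_swb_extension(wb_stereo, frame['swb'])
    return wb_stereo                                      # graceful WB fallback
```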

  • A Low-Bit-Rate Extension Algorithm to the 8 kbit/s CS-ACELP Based on Adaptive Fixed Codebook Modeling

    Hong Kook KIM  Hwang Soo LEE  

     
    PAPER-Speech Processing and Acoustics

    Vol: E82-D No:7  Page(s): 1087-1092

    In this paper, we propose an adaptive encoding method for the fixed codebook of CELP coders and implement an adaptive fixed code-excited linear prediction (AF-CELP) speech coder as a low-bit-rate extension to the 8 kbit/s CS-ACELP. The AF-CELP can be implemented at low bit rates and with low complexity by exploiting the fact that the fixed codebook contribution to the speech signal is periodic, as is the adaptive codebook (or pitch filter) contribution. Listening tests show that the 6.4 kbit/s AF-CELP has quality comparable to the 8 kbit/s CS-ACELP under real environmental test conditions.
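
    A minimal sketch of making the fixed-codebook contribution periodic (the one-tap pitch filter and the gain value are assumptions; the AF-CELP's actual adaptive fixed-codebook structure is not reproduced here):

```python
import numpy as np

def periodicity_enhanced_fixed_code(fixed_code, pitch_lag, beta=0.5):
    """Make the fixed-codebook vector periodic with period pitch_lag by
    feeding it through a one-tap pitch filter (beta and the filter form are
    assumptions for this sketch)."""
    enhanced = np.asarray(fixed_code, dtype=float).copy()
    for n in range(pitch_lag, len(enhanced)):
        enhanced[n] += beta * enhanced[n - pitch_lag]
    return enhanced

def build_excitation(adaptive_code, fixed_code, g_pitch, g_code, pitch_lag):
    """Total excitation: gain-scaled adaptive-codebook contribution plus the
    periodicity-enhanced fixed-codebook contribution."""
    return (g_pitch * np.asarray(adaptive_code, dtype=float)
            + g_code * periodicity_enhanced_fixed_code(fixed_code, pitch_lag))
```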