
Author Search Result

[Author] Naoki ISU (4 hits)

  • Production of LSP Parameter Sequences for Speech Synthesis Based on Neural Network Approach

    Tadaaki SHIMIZU  Hiroki YOSHIMURA  Yoshihiko SHINDO  Naoki ISU  Kazuhiro SUGATA  

     
    LETTER
    Vol: E80-A No: 8, Page(s): 1467-1471

    This paper presents a method for generating LSP parameter sequences for speech synthesis by rule. In our method, neural networks are designed to generate the LSP parameter sequences of Vowel-Consonant-Vowel (VCV) units. The quality of speech synthesized by concatenating VCV units with a table-look-up technique cannot be improved much because of the distortion that appears at the junctions between VCV units. In our method, the neural networks concatenate VCV units step by step with less distortion at the unit junctions, which yields good-quality synthesized speech.
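
    A minimal sketch of the idea described above, assuming a small feed-forward network that predicts each LSP frame from the previous frame plus a VCV context vector; all dimensions, names, and the conditioning scheme are hypothetical and only illustrate frame-by-frame generation with smooth unit junctions, not the authors' architecture.

    ```python
    # Hypothetical sketch: predict the next LSP frame from the previous frame
    # and a VCV-unit context encoding, so consecutive frames join smoothly.
    import numpy as np

    LSP_DIM, CTX_DIM, HIDDEN = 10, 16, 32          # made-up dimensions
    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.1, (HIDDEN, LSP_DIM + CTX_DIM))
    b1 = np.zeros(HIDDEN)
    W2 = rng.normal(0, 0.1, (LSP_DIM, HIDDEN))
    b2 = np.zeros(LSP_DIM)

    def predict_next_lsp(prev_lsp, context):
        """One forward pass: previous LSP frame + VCV context -> next LSP frame."""
        h = np.tanh(W1 @ np.concatenate([prev_lsp, context]) + b1)
        return W2 @ h + b2

    def generate_vcv_sequence(initial_lsp, contexts):
        """Generate an LSP sequence frame by frame; conditioning each frame on
        the previous one keeps the junctions between units smooth."""
        frames, lsp = [], initial_lsp
        for ctx in contexts:                        # one context vector per frame
            lsp = predict_next_lsp(lsp, ctx)
            frames.append(lsp)
        return np.stack(frames)

    seq = generate_vcv_sequence(np.zeros(LSP_DIM), rng.normal(size=(20, CTX_DIM)))
    print(seq.shape)                                # (20, 10)
    ```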

  • Construction of Noise Reduction Filter by Use of Sandglass-Type Neural Network

    Hiroki YOSHIMURA  Tadaaki SHIMIZU  Naoki ISU  Kazuhiro SUGATA  

     
    PAPER
    Vol: E80-A No: 8, Page(s): 1384-1390

    A noise reduction filter composed of a sandglass-type neural network (Sandglass-type Neural network Noise Reduction Filter: SNNRF) is proposed in this paper. A sandglass-type neural network (SNN) has a symmetric layer structure, with the same number of units in the input and output layers and fewer units in the hidden layer. It is known that, after learning, an SNN processes signals in a way equivalent to the Karhunen-Loeve (KL) expansion. We applied the recursive least squares (RLS) method to the learning of the SNNRF, so that it can perform noise reduction on-line. This paper shows theoretically that the SNNRF behaves optimally when the number of units in the hidden layer equals the rank of the covariance matrix of the signal component contained in the input signal. Computer experiments confirmed that the SNNRF acquires appropriate noise reduction characteristics from the input signals and remarkably improves their SN ratio.
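
    Since a trained linear sandglass (bottleneck) network with r hidden units is, as stated above, equivalent to a rank-r KL expansion, the sketch below emulates the filter with an offline eigen-decomposition; the paper's RLS-based on-line learning is not reproduced, and the signal/noise setup is invented for the demonstration.

    ```python
    # Illustrative KL-expansion filter: project noisy vectors onto the top-r
    # eigenvectors of the input covariance, where r is the rank of the signal.
    import numpy as np

    rng = np.random.default_rng(1)
    dim, rank, n = 8, 2, 5000

    # Synthetic rank-2 signal embedded in 8 dimensions, plus white noise.
    basis = np.linalg.qr(rng.normal(size=(dim, rank)))[0]        # orthonormal (8, 2)
    signal = rng.normal(size=(n, rank)) @ basis.T                # (n, 8)
    noisy = signal + 0.3 * rng.normal(size=(n, dim))

    # Keep the top-`rank` eigenvectors of the input covariance (KL expansion).
    eigvals, eigvecs = np.linalg.eigh(np.cov(noisy, rowvar=False))
    proj = eigvecs[:, np.argsort(eigvals)[::-1][:rank]]          # (8, rank)
    filtered = noisy @ proj @ proj.T                             # project and reconstruct

    def snr_db(clean, estimate):
        return 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2))

    print(f"SNR before: {snr_db(signal, noisy):.1f} dB, "
          f"after: {snr_db(signal, filtered):.1f} dB")
    ```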

  • A Method for Reinforcing Noun Countability Prediction

    Ryo NAGATA  Atsuo KAWAI  Koichiro MORIHIRO  Naoki ISU  

     
    PAPER-Natural Language Processing
    Vol: E90-D No: 12, Page(s): 2077-2086

    This paper proposes a method for reinforcing noun countability prediction, which plays a crucial role in determining correct determiners in machine translation and error detection. The proposed method reinforces countability prediction by introducing a novel heuristic called one countability per discourse: when a noun appears more than once in a discourse, all of its instances share the same countability. The basic idea of the proposed method is that mispredictions can be corrected by efficiently applying this heuristic. Experiments show that the proposed method successfully reinforces countability prediction and outperforms the other methods used for comparison. Beyond its performance, it has two advantages over earlier methods: (i) it is applicable to any countability prediction method, and (ii) it requires no human intervention to reinforce countability prediction.
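
    A minimal sketch of the one-countability-per-discourse idea as described above: instances of the same noun within one discourse vote, and minority predictions are overwritten by the majority label. The data layout and the base predictor's output format are hypothetical placeholders.

    ```python
    # Hypothetical sketch: correct countability mispredictions by majority vote
    # over all instances of a noun within the same discourse.
    from collections import Counter, defaultdict

    def reinforce_countability(predictions):
        """predictions: list of (discourse_id, noun, predicted_label).
        Returns the list with labels replaced by the per-discourse majority."""
        votes = defaultdict(Counter)
        for doc, noun, label in predictions:
            votes[(doc, noun)][label] += 1
        return [(doc, noun, votes[(doc, noun)].most_common(1)[0][0])
                for doc, noun, _ in predictions]

    base = [  # made-up output of some base countability predictor
        ("doc1", "information", "uncountable"),
        ("doc1", "information", "countable"),      # likely misprediction
        ("doc1", "information", "uncountable"),
        ("doc2", "chair", "countable"),
    ]
    print(reinforce_countability(base))
    ```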

  • A Statistical Model Based on the Three Head Words for Detecting Article Errors

    Ryo NAGATA  Tatsuya IGUCHI  Fumito MASUI  Atsuo KAWAI  Naoki ISU  

     
    PAPER-Educational Technology
    Vol: E88-D No: 7, Page(s): 1700-1706

    In this paper, we propose a statistical model for detecting article errors, which Japanese learners of English often make in English writing. It is based on three head words: the verb head, the preposition, and the noun head. To overcome the data sparseness problem, we apply the backed-off estimate to it. Experiments show that its performance (F-measure = 0.70) is better than that of other methods. Apart from its performance, it has two advantages: (i) rules for detecting article errors are automatically generated as conditional probabilities once a corpus is given; (ii) its recall and precision rates are adjustable.
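
    A hedged sketch of a backed-off estimate over the three head words for article prediction, roughly in the spirit of the abstract: the model falls back to shorter contexts when the full (verb head, preposition, noun head) combination is unseen, and an adjustable threshold turns the probability into an error decision. The backoff order, smoothing, and threshold are assumptions for illustration, not the paper's exact formulation.

    ```python
    # Hypothetical backed-off model: P(article | verb, prep, noun) estimated from
    # counts, backing off to (prep, noun), (noun), and the unconditional estimate.
    from collections import Counter

    ARTICLES = ("the", "a", "<none>")

    def train(samples):
        """samples: list of (verb_head, prep, noun_head, article)."""
        counts = {k: Counter() for k in ("vpn", "pn", "n", "")}
        for v, p, n, art in samples:
            counts["vpn"][(v, p, n, art)] += 1
            counts["pn"][(p, n, art)] += 1
            counts["n"][(n, art)] += 1
            counts[""][(art,)] += 1
        return counts

    def prob(counts, v, p, n, art):
        """Backed-off estimate: use the most specific context with any counts."""
        for key, ctx in (("vpn", (v, p, n)), ("pn", (p, n)), ("n", (n,)), ("", ())):
            total = sum(counts[key][ctx + (a,)] for a in ARTICLES)
            if total:
                return counts[key][ctx + (art,)] / total
        return 1.0 / len(ARTICLES)

    def is_article_error(counts, v, p, n, art, threshold=0.1):
        """Flag an error when the written article is unlikely under the model;
        the threshold makes the recall/precision trade-off adjustable."""
        return prob(counts, v, p, n, art) < threshold

    model = train([("read", "<none>", "book", "a"),
                   ("read", "<none>", "book", "the"),
                   ("give", "to", "student", "the")])
    print(is_article_error(model, "read", "<none>", "book", "<none>"))  # True
    ```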