Keyword Search Result

[Keyword] multilayer neural network (9 hits)

Results 1-9 of 9
  • Distinctive Phonetic Feature (DPF) Extraction Based on MLNs and Inhibition/Enhancement Network

    Mohammad Nurul HUDA  Hiroaki KAWASHIMA  Tsuneo NITTA  

     
PAPER-Speech and Hearing
Vol: E92-D No:4  Page(s): 671-680

This paper describes a distinctive phonetic feature (DPF) extraction method, with a low computation cost, for use in a phoneme recognition system. The method comprises three stages. The first stage uses two multilayer neural networks (MLNs): MLN_LF-DPF, which maps continuous acoustic features, or local features (LFs), onto discrete DPF features, and MLN_Dyn, which constrains the DPF context at the phoneme boundaries. The second stage incorporates inhibition/enhancement (In/En) functionalities to discriminate whether the dynamic patterns of the DPF trajectories are convex or concave: convex patterns are enhanced and concave patterns are inhibited. The third stage decorrelates the DPF vectors using the Gram-Schmidt orthogonalization procedure before feeding them into a hidden Markov model (HMM)-based classifier. In an experiment on Japanese Newspaper Article Sentences (JNAS) utterances, the proposed feature extractor, which incorporates two MLNs and an In/En network, was found to provide a higher phoneme correct rate with fewer mixture components in the HMMs.
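The Gram-Schmidt step in the third stage can be illustrated with a short sketch. The following minimal Python example (the function name and toy data are illustrative, not from the paper) orthogonalizes a set of feature vectors so that the resulting rows are mutually orthogonal:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a set of vectors (rows) via classical Gram-Schmidt.

    Each vector has the components along earlier (orthogonalized)
    vectors removed, so the resulting rows are mutually orthogonal.
    """
    ortho = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in ortho:
            # Subtract the projection of v onto each earlier basis vector.
            w -= (w @ u) * u
        norm = np.linalg.norm(w)
        if norm > 1e-12:            # skip (near-)dependent vectors
            ortho.append(w / norm)
    return np.array(ortho)

# Toy DPF-like vectors (rows); real DPF vectors come from the MLN outputs.
dpf = np.array([[1.0, 1.0, 0.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
print(gram_schmidt(dpf))
```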

  • Learning Algorithms Which Make Multilayer Neural Networks Multiple-Weight-and-Neuron-Fault Tolerant

    Tadayoshi HORITA  Itsuo TAKANAMI  Masatoshi MORI  

     
PAPER-Biocybernetics, Neurocomputing
Vol: E91-D No:4  Page(s): 1168-1175

Two simple but useful methods, called the deep learning methods, for making multilayer neural networks tolerant to multiple link-weight and neuron-output faults are proposed. The methods make the output errors in the learning phase smaller than those in practical use. The fault tolerance of the multilayer neural networks in practical use is analyzed in terms of the relationship between the output errors in the learning phase and those in practical use. The analysis shows that the multilayer neural networks have complete (100%) fault tolerance to multiple weight-and-neuron faults in practical use. Simulation results concerning the rate of successful learning, the degree of fault tolerance, and the learning time are also presented.
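As a rough illustration of this style of fault-injection learning (a sketch of the general idea, not the authors' exact algorithm), the following Python example trains a tiny network on XOR while randomly forcing hidden outputs to zero, so the learned weights keep the output error small even when neurons fail:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
lr, fault_rate = 0.5, 0.2

for epoch in range(5000):
    # Fault injection: randomly force some hidden outputs to zero
    # (stuck-at-0 neuron faults), much like dropout.
    mask = (rng.random(4) > fault_rate).astype(float)
    h = sigmoid(X @ W1 + b1) * mask
    y = sigmoid(h @ W2 + b2)
    # Back-propagate the squared error through the faulted network.
    dy = (y - T) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

h = sigmoid(X @ W1 + b1)              # fault-free evaluation
print(sigmoid(h @ W2 + b2).round(2))  # should approximate [0, 1, 1, 0]
```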

  • A Novel Learning Algorithm Which Makes Multilayer Neural Networks Multiple-Weight-Fault Tolerant

    Itsuo TAKANAMI  Yasuhiro OYAMA  

     
PAPER-Dependable Systems
Vol: E86-D No:12  Page(s): 2536-2543

We propose an efficient algorithm for making multilayer neural networks (MLNs) fault tolerant to all multiple weight faults in a multi-dimensional interval by intentionally injecting the two extreme multi-dimensional values of the interval into the weights of selected multiple links during the learning phase. The degree of fault tolerance to a multiple weight fault is measured by the number of essential multiple links. First, we analytically discuss how to effectively choose the multiple links to be injected, and present a learning algorithm for making MLNs fault tolerant to all multiple (i.e., simultaneous) faults in the interval defined by the two multi-dimensional extreme points. We then prove that after the learning algorithm finishes successfully, the MLNs are fault tolerant to all multiple faults in the interval. It is also shown that the time for a weight-modification cycle depends little on the fault multiplicity k when k is small. These results are confirmed by simulation.
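A sketch of one weight-modification cycle under this scheme follows; the single-layer model, the function names, and the numerical gradient are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def forward(W, x):
    """Hypothetical single-layer model used only for illustration."""
    return np.tanh(W @ x)

def error_under_fault(W, faults, value, x, t):
    """Output error when every selected link is forced to an extreme value."""
    Wf = W.copy()
    for (i, j) in faults:
        Wf[i, j] = value                  # inject the extreme weight value
    y = forward(Wf, x)
    return 0.5 * np.sum((y - t) ** 2)

def fault_injection_step(W, faults, lo, hi, x, t, lr=0.1, eps=1e-4):
    """One weight-modification cycle: nudge the fault-free weights so the
    error stays small when the selected links take either interval
    endpoint (lo or hi)."""
    for value in (lo, hi):
        base = error_under_fault(W, faults, value, x, t)
        g = np.zeros_like(W)
        for idx in np.ndindex(*W.shape):
            if idx in faults:
                continue                  # these links are overridden anyway
            Wp = W.copy(); Wp[idx] += eps
            g[idx] = (error_under_fault(Wp, faults, value, x, t) - base) / eps
        W = W - lr * g                    # numerical-gradient descent step
    return W

rng = np.random.default_rng(0)
W = rng.normal(0, 1, (2, 3))
x, t = np.array([1.0, -0.5, 0.2]), np.array([1.0, -1.0])
W = fault_injection_step(W, faults=[(0, 1)], lo=-2.0, hi=2.0, x=x, t=t)
```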

  • Multilayer Network Learning Algorithm Based on Pattern Search Method

    Xu-Gang WANG  Zheng TANG  Hiroki TAMURA  Masahiro ISHII  

     
PAPER-Neural Networks and Bioengineering
Vol: E86-A No:7  Page(s): 1869-1875

A new multilayer artificial neural network learning algorithm based on the pattern search method is proposed. The learning algorithm provides a very simple and effective means of searching for the minima of an objective function directly, without any knowledge of its derivatives. We test this algorithm on benchmark problems such as exclusive-or (XOR), parity, and alphabetic character learning. For all problems, the networks are shown to be trained efficiently by our algorithm. As a simple direct-search algorithm, it can also be applied easily in hardware implementations.
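The pattern search idea can be sketched as a derivative-free probe of the weight space: try a step in each direction along every weight, keep moves that reduce the error, and shrink the step when nothing helps. A minimal Python sketch under these assumptions (the paper's exact variant and network layout may differ):

```python
import numpy as np

def pattern_search(f, w, step=0.5, shrink=0.5, tol=1e-6, max_iter=10000):
    """Derivative-free minimization of f over the weight vector w.

    Probes each coordinate in both directions, keeps improving moves,
    and halves the step size when no coordinate move improves f.
    """
    best = f(w)
    for _ in range(max_iter):
        improved = False
        for i in range(len(w)):
            for d in (+step, -step):
                w[i] += d
                val = f(w)
                if val < best:
                    best, improved = val, True
                else:
                    w[i] -= d        # undo the unhelpful move
        if not improved:
            step *= shrink           # refine the search
            if step < tol:
                break
    return w, best

# XOR error as an objective over a tiny 2-2-1 network (hypothetical layout).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([0, 1, 1, 0], float)

def xor_error(w):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    y = np.tanh(h @ W2 + b2)
    return np.mean((y - T) ** 2)

w0 = np.random.default_rng(1).normal(0, 1, 9)
w, err = pattern_search(xor_error, w0)
print("final error:", err)
```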

  • Introducing an Adaptive VLR Algorithm Using Learning Automata for Multilayer Perceptron

    Behbood MASHOUFI  Mohammad Bagher MENHAJ  Sayed A. MOTAMEDI  Mohammad R. MEYBODI  

     
PAPER-Algorithms
Vol: E86-D No:3  Page(s): 594-609

One of the biggest limitations of the BP algorithm is its low rate of convergence. The Variable Learning Rate (VLR) algorithm is one of the well-known techniques that enhance the performance of BP. Because the VLR parameters have an important influence on its performance, we use learning automata (LA) to adjust them. The proposed algorithm, named the Adaptive Variable Learning Rate (AVLR) algorithm, dynamically tunes the VLR parameters with learning automata according to the error changes. Simulation results on practical problems such as sinusoidal function approximation, nonlinear system identification, phoneme recognition, and Persian printed letter recognition help to judge the merit of the proposed AVLR method.
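The underlying VLR rule can be written in a few lines. In one common formulation, the learning rate grows when the error falls and shrinks when the error rises too much; the inc/dec/max_ratio constants below stand in for the VLR parameters that the paper tunes with learning automata (the names and values here are illustrative):

```python
def vlr_update(lr, err_new, err_old, inc=1.05, dec=0.7, max_ratio=1.04):
    """One common variable-learning-rate rule (illustrative constants).

    Returns the new learning rate and whether to accept the weight update.
    inc, dec, and max_ratio play the role of the VLR parameters that the
    paper adjusts on-line with learning automata.
    """
    if err_new > err_old * max_ratio:
        return lr * dec, False   # error grew too much: shrink lr, reject step
    if err_new < err_old:
        return lr * inc, True    # error fell: grow lr, accept step
    return lr, True              # slight growth: keep lr, accept step
```

Adapting inc and dec on-line, e.g. by rewarding an automaton whenever its chosen value reduces the error, is the kind of adjustment the abstract attributes to the learning automata.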

  • A Training Algorithm for Multilayer Neural Networks of Hard-Limiting Units with Random Bias

    Hongbing ZHU  Kei EGUCHI  Toru TABATA  

     
PAPER
Vol: E83-A No:6  Page(s): 1040-1048

The conventional back-propagation algorithm cannot be applied to networks of units having hard-limiting output functions, because these functions cannot be differentiated. In this paper, a gradient descent algorithm suitable for training multilayer feedforward networks of units with hard-limiting output functions is presented. To obtain a differentiable output function for a hard-limiting unit, we use the fact that if the bias of a unit in such a network is a random variable with a smooth distribution function, the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, which show that the performance of this algorithm is similar to that of conventional back-propagation.
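The key fact can be made concrete: if the bias b has a smooth cumulative distribution function F, then P(step(w·x + b) = 1) = P(b > -w·x) = 1 - F(-w·x), which is differentiable in the inputs. With a logistic-distributed bias this probability is exactly the logistic sigmoid, as the sketch below shows (a minimal illustration, not the paper's full training algorithm):

```python
import numpy as np

def p_fire(w, x, scale=1.0):
    """Probability that a hard-limiting unit with random bias outputs 1.

    With bias b ~ logistic(0, scale):
        P(step(w.x + b) = 1) = P(b > -w.x) = 1 - F(-w.x)
                             = 1 / (1 + exp(-w.x / scale)),
    a continuously differentiable function of the inputs.
    """
    a = np.dot(w, x) / scale
    return 1.0 / (1.0 + np.exp(-a))

def dp_dx(w, x, scale=1.0):
    """Gradient of the firing probability with respect to the inputs,
    which is what makes gradient descent applicable."""
    p = p_fire(w, x, scale)
    return p * (1.0 - p) * w / scale
```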

  • Multilayer Neural Network with Threshold Neurons

    Hiroomi HIKAWA  Kazuo SATO  

     
PAPER-Neural Networks
Vol: E81-A No:6  Page(s): 1105-1112

In this paper, a new architecture of a Multilayer Neural Network (MNN) with on-chip learning for effective hardware implementation is proposed. To reduce the circuit size, a threshold function is used as the neurons' activation function, and a simplified back-propagation algorithm is employed to provide on-chip learning capability. The derivative of the activation function is modified to improve the rate of successful learning. The learning performance of the proposed architecture is tested by system-level simulations. Simulation results show that the modified derivative function improves the rate of successful learning and that the proposed MNN has good generalization capability. Furthermore, the proposed architecture is implemented on a field programmable gate array (FPGA). Logic-level simulation and a preliminary experiment are conducted to test the on-chip learning mechanism.
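The abstract does not spell out the modification, but a common hardware-friendly workaround, shown here purely as an illustration (not necessarily the paper's exact modification), replaces the threshold function's derivative, which is zero almost everywhere, with a constant surrogate inside a window around the threshold:

```python
import numpy as np

def threshold(a):
    """Hard-limiting activation used in the forward pass."""
    return (a >= 0).astype(float)

def surrogate_deriv(a, width=1.0):
    """Surrogate derivative used only in the backward pass.

    The true derivative of the threshold function is zero almost
    everywhere, so standard back-propagation stalls; this surrogate
    passes a constant gradient inside a window around the threshold
    and zero outside it.
    """
    return (np.abs(a) <= width).astype(float)
```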

  • Training Data Selection Method for Generalization by Multilayer Neural Networks

    Kazuyuki HARA  Kenji NAKAYAMA  

     
PAPER
Vol: E81-A No:3  Page(s): 374-381

A training data selection method is proposed for multilayer neural networks (MLNNs). This method selects a small number of training data that guarantee both generalization and fast training of the MLNNs applied to pattern classification. Generalization is achieved using the data located close to the boundary between the pattern classes. However, if only these data are used in training, convergence is slow; this phenomenon is analyzed in this paper. Therefore, in the proposed method, the MLNN is first trained using a number of randomly selected data (Step 1). The data for which the output error is relatively large are then selected and paired with the nearest data belonging to a different class. The newly selected data are in turn paired with their nearest data. In this way, pairs of data located close to the boundary can be found. Using these pairs of data, the MLNN is further trained (Step 2). Since there are several ways to combine Steps 1 and 2, the proposed method can be applied to both off-line and on-line training. The proposed method reduces the number of training data and, at the same time, speeds up training. Its usefulness is confirmed through computer simulation.
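The pairing idea can be sketched as follows (a minimal illustration with hypothetical names; the paper's exact selection rule may differ): pick the samples with the largest output error and pair each with its nearest neighbor from a different class.

```python
import numpy as np

def boundary_pairs(X, labels, errors, top_k=10):
    """Pair high-error samples with the nearest opposite-class samples.

    Picks the top_k samples with the largest output error and pairs
    each with its nearest sample from a different class; such pairs
    tend to lie close to the class boundary.
    """
    order = np.argsort(errors)[::-1][:top_k]    # largest-error samples first
    pairs = []
    for i in order:
        other = np.flatnonzero(labels != labels[i])
        dists = np.linalg.norm(X[other] - X[i], axis=1)
        j = other[np.argmin(dists)]             # nearest opposite-class sample
        pairs.append((i, j))
    return pairs
```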

  • Multi-Frequency Signal Classification by Multilayer Neural Networks and Linear Filter Methods

    Kazuyuki HARA  Kenji NAKAYAMA  

     
PAPER-Neural Networks
Vol: E80-A No:5  Page(s): 894-902

This paper compares the signal classification performance of multilayer neural networks (MLNNs) and linear filters (LFs). MLNNs are useful for classifying signals with arbitrary waveforms, while LFs are useful for signals specified by their frequency components. In this paper, both methods are compared on the basis of frequency-selective performance. The signals to be classified contain several frequency components. Furthermore, the effect of the number of signal samples is investigated; in this case, frequency information may be lost to some extent, which makes the classification problem difficult. From a practical viewpoint, the computational complexity is limited to the same level in both methods. IIR and FIR filters are compared. FIR filters in a direct form can save computation, independently of the filter order. IIR filters, on the other hand, cannot provide good signal classification due to their phase distortion, and require a large amount of computation due to their recursive structure. When the number of input samples is strictly limited, the signal vectors are widely distributed in the multi-dimensional signal space. In this case, the LF method cannot provide good classification performance, because the filters are designed to extract frequency components. The MLNN method, on the other hand, can form class regions in the signal vector space with a high degree of freedom.
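The LF side of the comparison can be sketched with a direct-form FIR filter followed by an output-energy score (an illustrative fragment, not the paper's experimental setup):

```python
import numpy as np

def fir_direct_form(x, h):
    """Direct-form FIR filtering: y[n] = sum_k h[k] * x[n - k]."""
    return np.convolve(x, h)[:len(x)]

def band_energy_score(x, h):
    """Energy of the filter output; thresholding this score gives a
    simple frequency-selective classifier."""
    y = fir_direct_form(x, h)
    return float(np.sum(y * y))
```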