Keyword Search Result

[Keyword] feedforward neural network (11 hits)

1-11 of 11 hits
  • Deep Learning Approaches for Pathological Voice Detection Using Heterogeneous Parameters

    JiYeoun LEE  Hee-Jin CHOI  

     
    LETTER-Speech and Hearing

  Publicized:
    2020/05/14
      Vol:
    E103-D No:8
      Page(s):
    1920-1923

    We propose a deep learning-based model for classifying pathological voices using a convolutional neural network and a feedforward neural network. The model uses combinations of heterogeneous parameters, including mel-frequency cepstral coefficients, linear predictive cepstral coefficients, and higher-order statistics. We validate the accuracy of this model using the Massachusetts Eye and Ear Infirmary (MEEI) voice disorder database and the Saarbruecken Voice Database (SVD). Our model achieved an accuracy of 99.3% for MEEI and 75.18% for SVD, an accuracy 7.18% higher than that of competing models in previous studies.
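
    As a rough illustration of the pipeline described above (spectral features feeding a small feedforward classifier), here is a minimal sketch. It is not the authors' model: the file list, labels, and layer sizes are placeholders, the CNN branch and higher-order statistics are omitted, and librosa and scikit-learn are assumed to be available.

    ```python
    # Hypothetical sketch: MFCC features + feedforward classifier for
    # pathological-voice detection. File paths and labels are placeholders.
    import numpy as np
    import librosa
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    def mfcc_features(path, n_mfcc=13):
        """Summarize a recording by the mean and std of its MFCCs."""
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    paths = ["voice_001.wav", "voice_002.wav",   # placeholder file list
             "voice_003.wav", "voice_004.wav"]
    labels = np.array([0, 0, 1, 1])              # 0 = healthy, 1 = pathological

    X = np.stack([mfcc_features(p) for p in paths])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5,
                                              stratify=labels)

    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    clf.fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))
    ```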

  • Small Number of Hidden Units for ELM with Two-Stage Linear Model

    Hieu Trung HUYNH  Yonggwan WON  

     
    PAPER-Data Mining

      Vol:
    E91-D No:4
      Page(s):
    1042-1049

    Single-hidden-layer feedforward neural networks (SLFNs) are frequently used in machine learning because they can form decision boundaries of arbitrary shape when the activation function of the hidden units is chosen properly. Most gradient-descent-based learning algorithms for neural networks are still slow because of the many learning steps they require. Recently, a learning algorithm called the extreme learning machine (ELM) was proposed for training SLFNs to overcome this problem. It randomly chooses the input weights and hidden-layer biases, and analytically determines the output weights by a matrix inverse operation. This algorithm can achieve good generalization performance with high learning speed in many applications. However, it often requires a large number of hidden units and takes a long time to classify new observations. In this paper, a new approach for training SLFNs called the least-squares extreme learning machine (LS-ELM) is proposed. Unlike gradient-descent-based algorithms and the ELM, our approach analytically determines the input weights, hidden-layer biases, and output weights based on linear models. For training with a large number of input patterns, an online training scheme that processes sub-blocks of the training set is also introduced. Experimental results on real applications show that our proposed algorithm offers high classification accuracy with a smaller number of hidden units and extremely high speed in both learning and testing.
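
    The baseline ELM step that the abstract contrasts against can be written in a few lines of NumPy: the input weights and biases are drawn at random, and the output weights come from a single least-squares solve. This is a sketch of plain ELM, not of the proposed LS-ELM; the hidden-layer size and the XOR toy data are illustrative.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def elm_train(X, T, n_hidden, seed=0):
        """Basic ELM: random input weights, analytic output weights."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
        b = rng.standard_normal(n_hidden)                # random hidden biases
        H = sigmoid(X @ W + b)                           # hidden-layer outputs
        beta = np.linalg.pinv(H) @ T                     # least-squares solve
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return sigmoid(X @ W + b) @ beta

    # Toy usage: fit XOR with 20 random hidden units.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    T = np.array([[0], [1], [1], [0]], float)
    W, b, beta = elm_train(X, T, n_hidden=20)
    print(elm_predict(X, W, b, beta).round(2))
    ```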

  • A Learning Algorithm with Activation Function Manipulation for Fault Tolerant Neural Networks

    Naotake KAMIURA  Yasuyuki TANIGUCHI  Yutaka HATA  Nobuyuki MATSUI  

     
    PAPER-Fault Tolerance

      Vol:
    E84-D No:7
      Page(s):
    899-905

    In this paper we propose a learning algorithm that enhances the fault tolerance of feedforward neural networks (NNs for short) by manipulating the gradient of the neurons' sigmoid activation function. We assume stuck-at-0 and stuck-at-1 faults of the connection links. For the output layer, we employ a function with a relatively gentle gradient to enhance its fault tolerance. To enhance the fault tolerance of the hidden layer, we steepen the gradient of the function after convergence. Experimental results for a character recognition problem show that our NN is superior in fault tolerance, learning cycles, and learning time to NNs trained with algorithms employing fault injection, forcible weight limits, and the calculation of each weight's relevance to the output error. Moreover, the gradient manipulation incorporated in our algorithm never degrades the generalization ability.
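
    The manipulation described above amounts to scaling the gain of the sigmoid. A minimal sketch, with gain values chosen purely for illustration (a gentle slope for the output layer, a steepened slope for hidden neurons after convergence):

    ```python
    import numpy as np

    def sigmoid(x, gain=1.0):
        """Sigmoid with adjustable gain; larger gain -> steeper gradient."""
        return 1.0 / (1.0 + np.exp(-gain * x))

    def sigmoid_grad(x, gain=1.0):
        s = sigmoid(x, gain)
        return gain * s * (1.0 - s)

    x = np.linspace(-4, 4, 9)
    print(sigmoid_grad(x, gain=0.5))  # gentle slope, e.g. for the output layer
    print(sigmoid_grad(x, gain=4.0))  # steep slope, e.g. hidden layer after convergence
    ```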

  • On a Weight Limit Approach for Enhancing Fault Tolerance of Feedforward Neural Networks

    Naotake KAMIURA  Teijiro ISOKAWA  Yutaka HATA  Nobuyuki MATSUI  Kazuharu YAMATO  

     
    PAPER-Fault Tolerance

      Vol:
    E83-D No:11
      Page(s):
    1931-1939

    To enhance the fault tolerance of feedforward neural networks (NNs for short) implemented in hardware, we discuss a learning algorithm that converges without adding extra neurons or a large amount of extra learning time and cycles. Our algorithm, a modification of the standard backpropagation algorithm (SBPA for short), limits the synaptic weights of neurons to a range during the learning phase. The upper and lower bounds of the weights are calculated from their average and standard deviation, and any weight beyond the calculated range is reupdated to the upper or lower bound. Since this decreases the standard deviation of the weights, it is useful for enhancing fault tolerance. We apply NNs trained with our algorithm and with others to a character recognition problem, and show that ours is superior in reliability, extra learning time, and/or extra learning cycles. We also clarify that our algorithm never degrades the generalization ability of NNs, although it coerces the weights into the calculated range.
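
    The reupdate step can be sketched directly from the abstract: after each weight update, compute the mean and standard deviation of the weights and clip any weight outside mean ± k·std back to the bound. The factor k here is an assumption; the paper derives its own bounds.

    ```python
    import numpy as np

    def limit_weights(W, k=2.0):
        """Clip weights to [mean - k*std, mean + k*std] (k is illustrative)."""
        mu, sigma = W.mean(), W.std()
        return np.clip(W, mu - k * sigma, mu + k * sigma)

    # Applied after every backpropagation update, e.g.:
    # W_hidden = limit_weights(W_hidden - lr * grad_hidden)
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 1.0, size=(8, 4))
    print(W.std(), limit_weights(W).std())  # clipped weights have smaller spread
    ```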

  • Evolutional Design and Training Algorithm for Feedforward Neural Networks

    Hiroki TAKAHASHI  Masayuki NAKAJIMA  

     
    PAPER-Image Processing,Computer Graphics and Pattern Recognition

      Vol:
    E82-D No:10
      Page(s):
    1384-1392

    In pattern recognition using neural networks, it is very difficult for researchers or users to design an optimal neural network architecture for a specific task. Almost any architecture can attain some recognition ratio, but it is difficult to derive analytically the architecture that is optimal for a specific task in terms of recognition ratio and training effectiveness. In this paper, an evolutionary method for training and designing feedforward neural networks is proposed. In the proposed method, each neural network is defined as an individual, and networks with the same architecture form a species. Individual networks are evaluated by the normalized M.S.E. (mean square error), which represents a network's performance on the training patterns, and their architectures then evolve according to an evolution rule proposed here. Architectures, in other words species, are evaluated by a second set of criteria, distinct from the criteria for individuals: these assess the most superior individual in the species and the species' speed of evolution, and species grow or shrink in population size accordingly. The evolution rule generates slightly different neural network architectures from superior species, so the proposed method can generate a variety of architectures. The design and training of neural networks for classifying simple 3×3- and 4×4-pixel patterns containing vertical, horizontal, and oblique lines, and for handwritten KATAKANA recognition, are presented, and the efficiency of the proposed method is discussed.
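
    A hedged sketch of the evolutionary loop outlined above: an individual is a trained network, networks sharing an architecture form a species, individuals are scored by normalized MSE, and new architectures are spawned by perturbing the best species. The mutation rule, population sizes, toy task, and use of scikit-learn's MLPRegressor are all illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (64, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1])          # toy training patterns

    def fitness(hidden, n_individuals=3):
        """Best normalized MSE among individuals of one species (architecture)."""
        scores = []
        for seed in range(n_individuals):
            net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=300,
                               random_state=seed).fit(X, y)
            scores.append(np.mean((net.predict(X) - y) ** 2) / np.var(y))
        return min(scores)

    species = [2, 4, 8]                             # initial architectures
    for generation in range(5):
        ranked = sorted(species, key=fitness)
        best = ranked[0]
        # Evolution rule (illustrative): spawn a slightly different
        # architecture from the superior species, drop the worst one.
        species = ranked[:-1] + [max(1, best + int(rng.integers(-2, 3)))]
        print("generation", generation, "best architecture:", best)
    ```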

  • Admissibility of Memorization Learning with Respect to Projection Learning in the Presence of Noise

    Akira HIRABAYASHI  Hidemitsu OGAWA  Yukihiko YAMASHITA  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Vol:
    E82-D No:2
      Page(s):
    488-496

    In the learning of feed-forward neural networks, the so-called 'training error' is often minimized. This error, however, is not related to the generalization capability, which is one of the major goals of learning; minimizing it can instead be interpreted as a substitute for another learning method that does consider the generalization capability. Admissibility is a concept for discussing whether one learning method can substitute for another. In this paper, we discuss the case where learning that minimizes a training error is used as a substitute for projection learning, which considers the generalization capability, in the presence of noise. Moreover, we give a method for choosing a training set that satisfies the admissibility condition.

  • Dynamic Constructive Fault Tolerant Algorithm for Feedforward Neural Networks

    Nait Charif HAMMADI  Toshiaki OHMAMEUDA  Keiichi KANEKO  Hideo ITO  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Vol:
    E81-D No:1
      Page(s):
    115-123

    In this paper, a dynamic constructive algorithm for fault tolerant feedforward neural networks, called DCFTA, is proposed. The algorithm starts with a network with a single hidden neuron, and a new hidden unit is added dynamically whenever the network fails to converge. Before the new hidden neuron is inserted into the network, only the weights connecting it to the other neurons are trained (i.e., updated), until there is no significant reduction of the output error. To generate a fault tolerant network, the relevance of each synaptic weight is estimated in each cycle, and only the weights whose relevance is less than a specified threshold are updated in that cycle. The loss of connections between neurons (equivalent to stuck-at-0 faults) is assumed. The simulation results indicate that networks constructed by DCFTA have significant fault tolerance.
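
    A compact sketch of the constructive idea: the network grows one hidden unit at a time until training converges, and only weights whose estimated relevance stays below a threshold are updated in a cycle. Unlike DCFTA proper, this sketch retrains from scratch at each size instead of freezing previously trained weights, and the relevance estimate and threshold are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # XOR stands in for a real task; inputs carry a bias column of ones.
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
    T = np.array([[0], [1], [1], [0]], float)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train(W1, W2, epochs=4000, lr=1.0, threshold=1.0):
        for _ in range(epochs):
            H = sigmoid(X @ W1)                      # hidden activations
            Y = sigmoid(H @ W2)                      # network outputs
            dY = (Y - T) * Y * (1 - Y)
            g2 = H.T @ dY                            # output-weight gradients
            g1 = X.T @ ((dY @ W2.T) * H * (1 - H))   # hidden-weight gradients
            # Relevance-gated update (illustrative): freeze weights whose
            # estimated sensitivity |grad * w| exceeds the threshold.
            W2 -= lr * g2 * (np.abs(g2 * W2) < threshold)
            W1 -= lr * g1 * (np.abs(g1 * W1) < threshold)
        return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2)

    n_hidden, mse = 0, 1.0
    while mse > 0.01 and n_hidden < 10:              # grow until convergence
        n_hidden += 1                                # add one hidden unit
        W1 = rng.normal(0, 1, (3, n_hidden))
        W2 = rng.normal(0, 1, (n_hidden, 1))
        mse = train(W1, W2)
    print("hidden units used:", n_hidden, "final mse:", round(float(mse), 4))
    ```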

  • On the Activation Function and Fault Tolerance in Feedforward Neural Networks

    Nait Charif HAMMADI  Hideo ITO  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E81-D No:1
      Page(s):
    66-72

    Considering pattern classification/recognition tasks, we empirically investigate the influence of the activation function on the fault tolerance of feedforward neural networks. The simulation results show that the activation function strongly influences both the fault tolerance and the generalization properties of neural networks. Neural networks with a symmetric sigmoid activation function are found to be considerably more fault tolerant than networks with an asymmetric sigmoid function. However, no close relation between fault tolerance and generalization was observed, and networks with an asymmetric activation function generalize slightly better than those with a symmetric one. The influence of the activation function on fault tolerance is first investigated on the XOR problem; the results are then generalized by evaluating the fault tolerance of different NNs implementing different benchmark problems.
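
    The two activation families being compared can be written down directly; the fault-injection experiments are the paper's, but the functions themselves are standard. Symmetric here means odd about the origin with outputs in (-1, 1); asymmetric means the ordinary logistic with outputs in (0, 1).

    ```python
    import numpy as np

    def asymmetric_sigmoid(x):
        """Ordinary logistic function: outputs in (0, 1)."""
        return 1.0 / (1.0 + np.exp(-x))

    def symmetric_sigmoid(x):
        """Symmetric counterpart: outputs in (-1, 1); equals tanh(x/2)."""
        return 2.0 / (1.0 + np.exp(-x)) - 1.0

    # One possible intuition for the reported result: with a symmetric
    # activation, a zero pre-activation maps to 0, so a stuck-at-0 fault
    # on an incoming link may perturb the unit's output less than with
    # the logistic, whose resting output is 0.5.
    print(asymmetric_sigmoid(0.0), symmetric_sigmoid(0.0))  # 0.5 vs 0.0
    ```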

  • A Learning Algorithm for Fault Tolerant Feedforward Neural Networks

    Nait Charif HAMMADI  Hideo ITO  

     
    PAPER-Redundancy Techniques

      Vol:
    E80-D No:1
      Page(s):
    21-27

    A new learning algorithm is proposed to enhance the fault tolerance of feedforward neural networks. The algorithm focuses on the links (weights) that would cause errors at the output if they suffered open faults. The relevance of each synaptic weight to the output error (i.e., the sensitivity of the output error to a fault in that weight) is estimated in each training cycle of standard backpropagation, using a Taylor expansion of the output around the fault-free weights, and the weight with the maximum relevance is decreased. The approach taken by the algorithm is thus to prevent the weights from acquiring large relevances. The simulation results indicate that networks trained with the proposed algorithm have significantly better fault tolerance than networks trained with the standard backpropagation algorithm, and that both the fault tolerance and the generalization abilities are improved.
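
    The relevance estimate described above can be sketched as follows: by a first-order Taylor expansion around the fault-free weights, an open (stuck-at-0) fault changes the error by roughly grad · (0 - w), so the relevance is |grad · w| and the largest-relevance weight is shrunk. The shrink factor is an assumption.

    ```python
    import numpy as np

    def suppress_max_relevance(W, grad, shrink=0.9):
        """First-order Taylor estimate of per-weight relevance: an open
        (stuck-at-0) fault changes the error by roughly grad * (0 - w),
        so relevance = |grad * w|. Decrease the max-relevance weight."""
        relevance = np.abs(grad * W)
        i = np.unravel_index(np.argmax(relevance), W.shape)
        W[i] *= shrink                    # shrink factor is illustrative
        return W

    # Called once per training cycle, after the standard backpropagation
    # step has produced the gradient, e.g.:
    # W_out = suppress_max_relevance(W_out, grad_out)
    ```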

  • Neural Networks with Interval Weights for Nonlinear Mappings of Interval Vectors

    Kitaek KWON  Hisao ISHIBUCHI  Hideo TANAKA  

     
    PAPER-Mapping

      Vol:
    E77-D No:4
      Page(s):
    409-417

    This paper proposes an approach for approximately realizing nonlinear mappings of interval vectors by interval neural networks. Interval neural networks in this paper are characterized by interval weights and interval biases; that is, the weights and biases are given by intervals instead of real numbers. First, an architecture of interval neural networks is proposed for dealing with interval input vectors. Interval neural networks with the proposed architecture map interval input vectors to interval output vectors by interval arithmetic, and some characteristic features of the nonlinear mappings they realize are described. Next, a learning algorithm is derived, in which the training data are pairs of interval input vectors and interval target vectors. Last, using a numerical example, the proposed approach is illustrated and compared with approaches based on standard back-propagation neural networks with real-number weights.
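
    The interval forward pass follows from ordinary interval arithmetic: a product of intervals takes the min and max over the endpoint products, sums add endpoint-wise, and a monotone activation maps an interval by its endpoints. A minimal one-layer sketch; the learning algorithm itself is the paper's own, and the example weights are arbitrary.

    ```python
    import numpy as np

    def interval_mul(a_lo, a_hi, b_lo, b_hi):
        """Product of intervals [a_lo, a_hi] * [b_lo, b_hi]."""
        p = np.stack([a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi])
        return p.min(axis=0), p.max(axis=0)

    def interval_layer(x_lo, x_hi, W_lo, W_hi, b_lo, b_hi):
        """One layer with interval weights/biases on an interval input."""
        lo, hi = interval_mul(x_lo[:, None], x_hi[:, None], W_lo, W_hi)
        net_lo, net_hi = lo.sum(axis=0) + b_lo, hi.sum(axis=0) + b_hi
        s = lambda z: 1.0 / (1.0 + np.exp(-z))      # sigmoid is monotone,
        return s(net_lo), s(net_hi)                 # so endpoints suffice

    # Toy usage: a 2-input, 2-unit layer with interval weights.
    x_lo, x_hi = np.array([0.1, 0.4]), np.array([0.2, 0.6])
    W_lo = np.array([[0.5, -1.0], [0.3, 0.2]]); W_hi = W_lo + 0.1
    b_lo = np.array([-0.1, 0.0]);               b_hi = b_lo + 0.05
    print(interval_layer(x_lo, x_hi, W_lo, W_hi, b_lo, b_hi))
    ```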

  • AVHRR Image Segmentation Using Modified Backpropagation Algorithm

    Tao CHEN  Mikio TAKAGI  

     
    PAPER-Image Processing

      Vol:
    E77-D No:4
      Page(s):
    490-497

    Analysis of satellite images requires classification of image objects. Since different categories may have almost the same brightness or features in high-dimensional remote sensing data, many object categories overlap with each other, and how to segment the object categories accurately is still an open question. It is widely recognized that the assumptions required by many classification methods (maximum likelihood estimation, etc.) are suspect for textural features based on image pixel brightness. We propose an image-feature-based neural network approach for the segmentation of AVHRR images. The learning algorithm is a modified backpropagation with gain and weight decay, since feedforward networks using the backpropagation algorithm have been generally successful and enjoy wide popularity. Destructive algorithms that adapt the neural architecture during training have also been developed. A classification accuracy of 100% is reached on a validation data set, and the classification result is compared with those of Kohonen's LVQ and a pixel-by-pixel method based on the basic backpropagation algorithm. Visual investigation of the result images shows that our method not only distinguishes categories with similar signatures very well but is also robust to noise.
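
    The two modifications named above, a gain term inside the sigmoid and weight decay on the update, can be sketched as a single update rule; the learning rate, gain, and decay constants are illustrative.

    ```python
    import numpy as np

    def sigmoid(x, gain=1.0):
        """Sigmoid with a gain parameter, as in the modified backpropagation."""
        return 1.0 / (1.0 + np.exp(-gain * x))

    def update(W, grad, lr=0.1, decay=1e-4):
        """Gradient step with weight decay: the decay term shrinks weights
        toward zero so unimportant connections fade during training."""
        return W - lr * (grad + decay * W)

    # Applied at every backpropagation step, e.g.:
    # W = update(W, grad_of_loss_wrt_W)
    ```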