
A Learning Algorithm with Activation Function Manipulation for Fault Tolerant Neural Networks

Naotake KAMIURA, Yasuyuki TANIGUCHI, Yutaka HATA, Nobuyuki MATSUI


Summary

In this paper we propose a learning algorithm that enhances the fault tolerance of feedforward neural networks (NNs for short) by manipulating the gradient of the sigmoid activation function of neurons. We assume stuck-at-0 and stuck-at-1 faults of connection links. For the output layer, we employ a function with a relatively gentle gradient to enhance its fault tolerance. To enhance the fault tolerance of the hidden layer, we steepen the gradient of the function after convergence. Experimental results for a character recognition problem show that our NN is superior in fault tolerance, learning cycles, and learning time to NNs trained with algorithms employing fault injection, forcible weight limits, and calculation of the relevance of each weight to the output error. Moreover, the gradient manipulation incorporated in our algorithm does not degrade the generalization ability.
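As a rough illustration of the idea summarized above, the following Python sketch parameterizes the sigmoid with an adjustable gain (slope) and applies a gentle gain to the output layer and a steepened gain to the hidden layer. The parameter name `gain` and the specific values are illustrative assumptions; the abstract does not give the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Sigmoid with an adjustable gain (slope) parameter.

    A larger gain steepens the function; a smaller gain flattens it.
    The name 'gain' and the default of 1.0 are illustrative assumptions,
    not the paper's exact formulation.
    """
    return 1.0 / (1.0 + np.exp(-gain * x))

x = np.linspace(-4.0, 4.0, 9)

# Output layer: a relatively gentle gradient (small gain), so a
# stuck-at-0 or stuck-at-1 fault on a connection link perturbs
# the output less.
y_output = sigmoid(x, gain=0.5)

# Hidden layer: after training converges, steepen the gradient
# (large gain), pushing hidden activations toward saturation,
# where they are less sensitive to faulty weights.
y_hidden = sigmoid(x, gain=4.0)

print(y_output)
print(y_hidden)
```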

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E84-D, No.7, pp.899-905
Publication Date
2001/07/01
Type of Manuscript
PAPER
Category
Fault Tolerance
