On a Weight Limit Approach for Enhancing Fault Tolerance of Feedforward Neural Networks

Naotake KAMIURA, Teijiro ISOKAWA, Yutaka HATA, Nobuyuki MATSUI, Kazuharu YAMATO


Summary

To enhance the fault tolerance of feedforward neural networks (NNs for short) implemented in hardware, we discuss a learning algorithm that converges without adding extra neurons or a large amount of extra learning time and cycles. Our algorithm, a modification of the standard backpropagation algorithm (SBPA for short), limits the synaptic weights of neurons to a range during the learning phase. The upper and lower bounds of the range are calculated from the average and standard deviation of the weights, and any weight beyond the calculated range is then re-updated to the upper or lower bound. Since this decreases the standard deviation of the weights, it is useful for enhancing fault tolerance. We apply NNs trained with our algorithm and with other algorithms to a character recognition problem, and show that ours is superior to the others in reliability, extra learning time, and/or extra learning cycles. We also clarify that our algorithm never degrades the generalization ability of NNs, even though it coerces the weights into the calculated range.
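The weight-limiting step can be pictured as follows: after a backpropagation update, the mean and standard deviation of the weights define an interval, and any weight outside it is re-updated to the nearest bound. Below is a minimal NumPy sketch of this step under stated assumptions: the multiplier k, the function names, and the placement of the clip after every update are illustrative choices, since the abstract does not give the exact bound formula or schedule.

    import numpy as np

    def clip_weights(W, k=2.0):
        # Bounds computed from the mean and standard deviation of the
        # current weights, as the summary describes.  The multiplier k
        # is a hypothetical parameter, not taken from the paper.
        mu, sigma = W.mean(), W.std()
        return np.clip(W, mu - k * sigma, mu + k * sigma)

    def train_step(W, grad, lr=0.1, k=2.0):
        # Ordinary SBPA weight update, followed by the re-update step
        # that coerces out-of-range weights to the upper or lower bound.
        W = W - lr * grad
        return clip_weights(W, k)

    # Example: limit a random weight matrix.
    W = np.random.randn(16, 8) * 3.0
    W = clip_weights(W)

Because the clip pulls outlying weights toward the mean, the standard deviation of the weight distribution shrinks, which is the property the summary credits with improving fault tolerance.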

Publication
IEICE TRANSACTIONS on Information Vol.E83-D No.11 pp.1931-1939
Publication Date
2000/11/25
Type of Manuscript
PAPER
Category
Fault Tolerance
