
A Fast Neural Network Learning with Guaranteed Convergence to Zero System Error

Teruo AJIMURA, Isao YAMADA, Kohichi SAKANIWA

Summary:

Learning algorithms for neural networks, such as the back-propagation algorithm, are generally considered well established. However, two major issues remain to be solved. First, learning may become trapped at a local minimum. Second, the convergence rate is too slow. Chang and Ghaffar proposed adding a new hidden node whenever learning stalls at a local minimum, and retraining the enlarged network until the error converges to zero. Their method designs the newly generated weights so that the network obtained after introducing the new hidden node has a lower error than that at the original local minimum. In this paper, we propose a new method that improves their convergence rate. At the starting point of the new network, the proposed method is expected to yield a lower system error and a larger error-gradient magnitude than their method, which leads to a faster convergence rate. Indeed, numerical examples show that the proposed method performs much better than the conventional method of Chang and Ghaffar.
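The grow-a-node-when-stalled strategy that the summary describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual method: the `GrowingNet` class, the stall test, and the small random initialization of the new node's weights are all inventions of this sketch. Both Chang and Ghaffar's method and the proposed method instead design the new node's weights analytically so that the grown network is guaranteed to start below the error of the original local minimum.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowingNet:
    """One-hidden-layer sigmoid network that grows a hidden node when
    training stalls. Only the node-addition loop is modeled here; the
    analytic, error-decreasing weight design (the papers' contribution)
    is replaced by small random initialization."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1.T + self.b1)      # hidden activations
        return sigmoid(self.h @ self.W2.T + self.b2)

    def error(self, X, Y):
        return 0.5 * np.mean((self.forward(X) - Y) ** 2)  # system error

    def step(self, X, Y, lr=0.5):
        # one batch gradient-descent (back-propagation) step
        out = self.forward(X)
        d2 = (out - Y) * out * (1.0 - out)             # output-layer delta
        d1 = (d2 @ self.W2) * self.h * (1.0 - self.h)  # hidden-layer delta
        self.W2 -= lr * d2.T @ self.h / len(X)
        self.b2 -= lr * d2.mean(axis=0)
        self.W1 -= lr * d1.T @ X / len(X)
        self.b1 -= lr * d1.mean(axis=0)

    def add_hidden_node(self):
        # Grow the hidden layer by one node; random initialization stands
        # in for the analytic weight design of the actual methods.
        self.W1 = np.vstack([self.W1, rng.normal(0.0, 0.5, (1, self.W1.shape[1]))])
        self.b1 = np.append(self.b1, 0.0)
        self.W2 = np.hstack([self.W2, rng.normal(0.0, 0.5, (self.W2.shape[0], 1))])

# XOR: a classic problem where small networks easily stall at local minima.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

net = GrowingNet(2, 2, 1)
e0 = net.error(X, Y)
prev = e0
for epoch in range(30000):
    net.step(X, Y)
    if epoch % 500 == 499:
        e = net.error(X, Y)
        # crude stall test: negligible progress while error is still large
        if e > 1e-3 and prev - e < 1e-5:
            net.add_hidden_node()
        prev = e
```

The key design point the paper addresses is precisely what this sketch leaves random: how to choose the new node's weights so that the restarted training begins with both a lower error and a larger gradient magnitude, which is what speeds up convergence.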

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E79-A No.9 pp.1433-1439
Publication Date
1996/09/25
Type of Manuscript
Special Section PAPER (Special Section on Information Theory and Its Applications)
Category
Stochastic Process/Learning
