Learning algorithms for neural networks, such as the back-propagation algorithm, are generally considered well established. However, two major issues remain unresolved. First, training may become trapped at a local minimum. Second, the convergence rate is often too slow. Chang and Ghaffar proposed adding a new hidden node whenever training stops at a local minimum and then retraining the enlarged network until the error converges to zero. Their method designs the newly introduced weights so that the new network starts with a smaller error than that at the original local minimum. In this paper, we propose a new method that improves on their convergence rate. At the starting point of the new network, the proposed method is expected to yield both a lower system error and a larger error-gradient magnitude than their method, which leads to faster convergence. Numerical examples show that the proposed method performs substantially better than the conventional method of Chang and Ghaffar.
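To make the node-addition idea concrete, below is a minimal sketch (not the paper's algorithm) of "grow a hidden node when training stalls, then resume training," assuming a single-hidden-layer sigmoid network with a linear output trained by plain gradient descent on squared error. All function and variable names are illustrative, and the random initialization of the new node's weights is only a placeholder: Chang and Ghaffar, and the proposed method, instead construct these weights analytically so that the enlarged network provably restarts with a lower error (and, in the proposed method, a larger error-gradient magnitude) than at the local minimum.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, W2):
    H = sigmoid(X @ W1)          # hidden activations, shape (N, n_hidden)
    y = H @ W2                   # linear output, shape (N, 1)
    return H, y

def sse(y, t):
    return 0.5 * np.sum((y - t) ** 2)

def grads(X, t, W1, W2):
    H, y = forward(X, W1, W2)
    e = y - t                               # output error, shape (N, 1)
    gW2 = H.T @ e                           # gradient w.r.t. hidden-to-output weights
    dH = (e @ W2.T) * H * (1.0 - H)         # back-propagated error at hidden layer
    gW1 = X.T @ dH                          # gradient w.r.t. input-to-hidden weights
    return gW1, gW2

def train_with_node_addition(X, t, n_hidden=2, lr=0.05, tol=1e-4,
                             stall_grad=1e-5, max_epochs=50000):
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    err = sse(forward(X, W1, W2)[1], t)
    for epoch in range(max_epochs):
        gW1, gW2 = grads(X, t, W1, W2)
        err = sse(forward(X, W1, W2)[1], t)
        if err < tol:
            break
        gnorm = np.sqrt(np.sum(gW1 ** 2) + np.sum(gW2 ** 2))
        if gnorm < stall_grad:
            # Stuck at (or near) a local minimum: add one hidden node and keep going.
            # Placeholder initialization -- the cited methods choose these weights
            # so that the restart error is strictly below the stalled error.
            W1 = np.hstack([W1, rng.normal(scale=0.5, size=(n_in, 1))])
            W2 = np.vstack([W2, rng.normal(scale=0.01, size=(1, 1))])
            continue
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2, err

# Example: XOR with a bias column appended to the inputs.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2, err = train_with_node_addition(X, t)
print("final hidden nodes:", W1.shape[1], "final SSE:", err)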
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Teruo AJIMURA, Isao YAMADA, Kohichi SAKANIWA, "A Fast Neural Network Learning with Guaranteed Convergence to Zero System Error," IEICE TRANSACTIONS on Fundamentals, vol. E79-A, no. 9, pp. 1433-1439, September 1996.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/e79-a_9_1433/_p
@ARTICLE{e79-a_9_1433,
author={Teruo AJIMURA and Isao YAMADA and Kohichi SAKANIWA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Fast Neural Network Learning with Guaranteed Convergence to Zero System Error},
year={1996},
volume={E79-A},
number={9},
pages={1433-1439},
month={September},
}
TY - JOUR
TI - A Fast Neural Network Learning with Guaranteed Convergence to Zero System Error
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1433
EP - 1439
AU - Teruo AJIMURA
AU - Isao YAMADA
AU - Kohichi SAKANIWA
PY - 1996
JO - IEICE TRANSACTIONS on Fundamentals
VL - E79-A
IS - 9
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - September 1996
ER -