A new regularization cost function for generalization in real-valued function learning is proposed. This cost function is derived from the maximum likelihood method using a modified sample distribution, and consists of a sum of square errors and a stabilizer which is a function of integrated square derivatives. Each of the regularization parameters which gives the minimum estimation error can be obtained uniquely and non-empirically. The parameters are not constants and change in value during learning. Numerical simulation shows that this cost function predicts the true error accurately and is effective in neural network learning.
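The cost described above has the general shape "sum of squared errors plus a stabilizer built from integrated squared derivatives." As a rough illustration only (the paper's exact functional form, and its non-empirical rule for the regularization parameters, are not reproduced here), such a cost can be sketched on a uniform grid, with the integral approximated by finite differences and a hypothetical fixed weight `lam` standing in for the paper's adaptive parameters:

```python
import numpy as np

def regularized_cost(f_vals, targets, lam, dx):
    """Sum of squared errors plus lam * integral of (f')^2,
    approximated on a uniform grid with spacing dx.
    Illustrative sketch only; not the paper's exact formulation."""
    sse = np.sum((f_vals - targets) ** 2)
    df = np.diff(f_vals) / dx          # finite-difference first derivative
    stabilizer = np.sum(df ** 2) * dx  # Riemann-sum approximation of the integral
    return sse + lam * stabilizer

# Example: a sine wave fit against offset targets.
x = np.linspace(0.0, 1.0, 101)
f_vals = np.sin(2 * np.pi * x)
targets = f_vals + 0.1                 # constant offset as stand-in noise
cost = regularized_cost(f_vals, targets, lam=0.01, dx=x[1] - x[0])
```

The stabilizer term penalizes rapidly varying functions, so smoother candidate functions incur a lower total cost; in the paper this trade-off is controlled not by a hand-tuned constant but by parameters that change during learning.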
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Miki YAMADA, "A Regularization Method for Neural Network Learning that Minimizes Estimation Error" in IEICE TRANSACTIONS on Information,
vol. E77-D, no. 4, pp. 418-424, April 1994.
Abstract: A new regularization cost function for generalization in real-valued function learning is proposed. This cost function is derived from the maximum likelihood method using a modified sample distribution, and consists of a sum of square errors and a stabilizer which is a function of integrated square derivatives. Each of the regularization parameters which gives the minimum estimation error can be obtained uniquely and non-empirically. The parameters are not constants and change in value during learning. Numerical simulation shows that this cost function predicts the true error accurately and is effective in neural network learning.
URL: https://global.ieice.org/en_transactions/information/10.1587/e77-d_4_418/_p
@ARTICLE{e77-d_4_418,
author={Miki YAMADA},
journal={IEICE TRANSACTIONS on Information},
title={A Regularization Method for Neural Network Learning that Minimizes Estimation Error},
year={1994},
volume={E77-D},
number={4},
pages={418-424},
abstract={A new regularization cost function for generalization in real-valued function learning is proposed. This cost function is derived from the maximum likelihood method using a modified sample distribution, and consists of a sum of square errors and a stabilizer which is a function of integrated square derivatives. Each of the regularization parameters which gives the minimum estimation error can be obtained uniquely and non-empirically. The parameters are not constants and change in value during learning. Numerical simulation shows that this cost function predicts the true error accurately and is effective in neural network learning.},
month={April},
}
TY - JOUR
TI - A Regularization Method for Neural Network Learning that Minimizes Estimation Error
T2 - IEICE TRANSACTIONS on Information
SP - 418
EP - 424
AU - Miki YAMADA
PY - 1994
JO - IEICE TRANSACTIONS on Information
VL - E77-D
IS - 4
JA - IEICE TRANSACTIONS on Information
Y1 - April 1994
AB - A new regularization cost function for generalization in real-valued function learning is proposed. This cost function is derived from the maximum likelihood method using a modified sample distribution, and consists of a sum of square errors and a stabilizer which is a function of integrated square derivatives. Each of the regularization parameters which gives the minimum estimation error can be obtained uniquely and non-empirically. The parameters are not constants and change in value during learning. Numerical simulation shows that this cost function predicts the true error accurately and is effective in neural network learning.
ER -