
A Regularization Method for Neural Network Learning that Minimizes Estimation Error

Miki YAMADA

Summary:

A new regularization cost function for generalization in real-valued function learning is proposed. This cost function is derived from the maximum likelihood method using a modified sample distribution, and consists of a sum of squared errors and a stabilizer that is a function of integrated squared derivatives. Each regularization parameter that gives the minimum estimation error can be obtained uniquely and non-empirically. The parameters are not constant; their values change during learning. Numerical simulation shows that this cost function predicts the true error accurately and is effective in neural network learning.
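The following is a minimal sketch of a cost of the general form described in the summary: a sum of squared errors plus a stabilizer built from integrated squared derivatives of the learned function. It is not the paper's formulation; the model f, the grid used to approximate the integrals, and the fixed placeholder values for the regularization weights lambdas are all illustrative assumptions. In particular, the paper determines each regularization parameter uniquely and non-empirically and lets it change during learning, which is not reproduced here.

import numpy as np

def regularized_cost(f, theta, x_train, y_train, lambdas, x_grid):
    """Sum of squared errors plus derivative-based stabilizer terms (sketch).

    f       : callable f(x, theta) -> predicted values (vectorized over x)
    lambdas : regularization weights, one per derivative order (assumed fixed here)
    x_grid  : dense grid over the input domain used to approximate the integrals
    """
    # Data-fit term: sum of squared errors on the training sample.
    residuals = f(x_train, theta) - y_train
    sse = np.sum(residuals ** 2)

    # Stabilizer: sum over derivative orders k of
    #   lambda_k * integral (d^k f / dx^k)^2 dx,
    # with derivatives and the integral approximated numerically on x_grid.
    y_grid = f(x_grid, theta)
    stabilizer = 0.0
    deriv = y_grid
    for lam in lambdas:                      # lambdas[k-1] weights the k-th derivative
        deriv = np.gradient(deriv, x_grid)   # next-order numerical derivative on the grid
        stabilizer += lam * np.trapz(deriv ** 2, x_grid)

    return sse + stabilizer

# Usage example with a toy quadratic model f(x) = a*x^2 + b*x + c (illustrative only).
if __name__ == "__main__":
    def f(x, theta):
        a, b, c = theta
        return a * x ** 2 + b * x + c

    rng = np.random.default_rng(0)
    x_train = rng.uniform(-1.0, 1.0, size=20)
    y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(20)
    x_grid = np.linspace(-1.0, 1.0, 501)

    cost = regularized_cost(f, (0.5, 1.0, 0.0), x_train, y_train,
                            lambdas=(1e-2, 1e-3), x_grid=x_grid)
    print(f"regularized cost: {cost:.4f}")

In this sketch the integrals are approximated by the trapezoidal rule on a dense grid, which suffices to show the structure of the cost; the paper's analysis of how the weights should be set to minimize estimation error is the substantive contribution and is not captured here.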

Publication
IEICE TRANSACTIONS on Information, Vol. E77-D, No. 4, pp. 418-424
Publication Date
1994/04/25
Type of Manuscript
Special Section PAPER (Special Issue on Neurocomputing)
Category
Regularization
