
Geometry of Admissible Parameter Region in Neural Learning

Kazushi IKEDA, Shun-Ichi AMARI


Summary:

In general, a learning machine behaves better as the number of training examples increases, and it is important to know how fast and how well this behavior improves. The average prediction error, the average probability that the trained machine mispredicts the output signal, is one of the most popular criteria for assessing this behavior. However, the average prediction error is not easy to evaluate even in the simplest case, the linear dichotomy (perceptron). When a continuous deterministic dichotomy machine is trained with t examples of input-output pairs produced by a realizable teacher, these examples restrict the region of the parameter space that includes the true parameter: any parameter in this region can explain the input-output behavior of the examples. Such a region, called the admissible region, in general forms a (curved) polyhedron in the parameter space and becomes smaller as the number of examples increases. The present paper studies the shape and volume of the admissible region using a stochastic geometrical approach, exploiting the fact that the admissible region is dual to the convex hull that the examples form in the example space. Since the admissible region is related to the average prediction error of the linear dichotomy, we derive new upper and lower bounds on the average prediction error.
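As a rough illustration of the admissible region described above (not part of the paper), the following Python sketch samples a realizable teacher perceptron, labels t random examples with it, and then estimates the relative volume of the admissible region, i.e. the fraction of unit-norm parameter vectors that reproduce all teacher labels, by Monte-Carlo rejection sampling. The dimensions, sample counts, and variable names are illustrative assumptions, not quantities from the paper.

```python
# A minimal Monte-Carlo sketch (assumed setup, not from the paper) of the
# admissible region of a linear dichotomy: the set of weight vectors w that
# reproduce the teacher's labels sign(w0 . x) on all t training examples.
import numpy as np

rng = np.random.default_rng(0)

d = 5                 # input dimension (assumed)
t = 20                # number of training examples (assumed)
n_samples = 200_000   # Monte-Carlo samples of candidate parameters (assumed)

# Realizable teacher: a fixed weight vector on the unit sphere.
w_true = rng.standard_normal(d)
w_true /= np.linalg.norm(w_true)

# Training examples and their teacher labels (deterministic dichotomy).
X = rng.standard_normal((t, d))
y = np.sign(X @ w_true)

# Sample candidate parameters uniformly on the unit sphere and keep those
# that classify every example exactly as the teacher does.
W = rng.standard_normal((n_samples, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
admissible = np.all(np.sign(W @ X.T) == y, axis=1)

# Relative volume (solid-angle fraction) of the admissible region.
print(f"estimated relative volume: {admissible.mean():.4g}")

# The prediction error of any admissible parameter can be estimated on
# fresh test inputs drawn from the same distribution.
X_test = rng.standard_normal((10_000, d))
y_test = np.sign(X_test @ w_true)
w_hat = W[admissible][0] if admissible.any() else w_true
print("prediction error of one admissible parameter:",
      np.mean(np.sign(X_test @ w_hat) != y_test))
```

As the number of examples t grows, the estimated relative volume shrinks, which is the behavior the paper analyzes when relating the admissible region to the average prediction error.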

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E79-A No.6 pp.938-943
Publication Date
1996/06/25
Type of Manuscript
PAPER
Category
Neural Networks
