Eri YAMAGISHI, Minako NOZAWA, Yoshinori UESAKA
Conventional learning algorithms can be viewed as a form of estimation of the true recognition function from sample patterns. Such an estimation requires a good assumption about the prior distribution underlying the learning data. Human beings, on the other hand, appear able to obtain better results from an extremely small number of samples. This suggests that humans may use a suitable prior (called a presupposition here), which is an essential key to making recognition machines highly flexible. In this paper we propose a framework for guessing the presupposition a learner uses in the learning process, based on the learner's learning result. We first point out that such a guess requires an assumption about what kind of estimation method the learner uses, and that the problem of guessing the presupposition is in general ill-defined. With these points in mind, the framework is developed under the assumption that the learner uses Bayesian estimation, and a method for determining the presupposition is demonstrated under two examples of constraints on both the family of presuppositions and the set of recognition functions. Finally, a simple example of learning with a presupposition is presented to show that the guessed presupposition guarantees a better fit to the samples and prevents the learning machine from over-learning (overfitting).
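To make the idea concrete, here is a minimal sketch, not the paper's actual formulation: a hypothetical learner estimates a coin's head probability from a handful of samples using the Bayesian posterior mean under a symmetric Beta(a, a) presupposition, and we recover the presupposition parameter a from the samples and the learner's reported estimate. The guess is well-defined only because the prior family has been constrained to a single parameter, mirroring the abstract's point that the unconstrained problem is ill-defined. All names and the Beta-Bernoulli setup are illustrative assumptions.

```python
import numpy as np

def learner_estimate(samples, a):
    """Posterior-mean estimate of the head probability under a
    symmetric Beta(a, a) presupposition (hypothetical learner)."""
    heads, n = samples.sum(), len(samples)
    return (heads + a) / (n + 2 * a)

def guess_presupposition(samples, reported, grid=np.linspace(0.01, 50, 5000)):
    """Search the constrained one-parameter prior family for the value of a
    that best explains the learner's reported estimate."""
    errors = [abs(learner_estimate(samples, a) - reported) for a in grid]
    return grid[int(np.argmin(errors))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.binomial(1, 0.7, size=5)        # extremely small sample
    true_a = 3.0                                   # learner's hidden presupposition
    reported = learner_estimate(samples, true_a)   # learner's learning result
    print("guessed a =", guess_presupposition(samples, reported))
```

With only five samples the raw frequency estimate varies wildly, while the estimate regularized by the recovered presupposition stays close to the learner's; this is the sense in which a suitable presupposition counters over-learning in the sketch above.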