
Author Search Result

[Author] Yoshinori UESAKA (2 hits)

Showing 1-2 of 2 hits
  • On the Human Being Presupposition Used in Learning

    Eri YAMAGISHI  Minako NOZAWA  Yoshinori UESAKA  

     
    PAPER-Neural Nets and Human Being

      Vol: E79-A, No: 10, Page(s): 1601-1607

    Conventional learning algorithms can be viewed as a kind of estimation of the true recognition function from sample patterns. Such an estimation requires a good assumption about the prior distribution underlying the learning data. Human beings, on the other hand, seem able to obtain better results from an extremely small number of samples. This suggests that the human being might use a suitable prior (called a presupposition here), which is an essential key to making recognition machines highly flexible. In the present paper we propose a framework for guessing the presupposition a learner used in the learning process, based on the learning result. First it is pointed out that such a guess requires assuming what kind of estimation method the learner uses, and that the problem of guessing the presupposition is in general ill-defined. With these points in mind, the framework is given under the assumption that the learner uses Bayesian estimation, and a method for determining the presupposition is demonstrated under two example constraints on both the family of presuppositions and the set of recognition functions. Finally, a simple example of learning with a presupposition is demonstrated, showing that the guessed presupposition gives a better fit to the samples and prevents a learning machine from overlearning.
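
    As a concrete illustration of the framework's core step, here is a minimal, hypothetical sketch (not taken from the paper): it assumes the recognition function is a single Bernoulli parameter, the presupposition family is the symmetric Beta(alpha, alpha) priors, and the learner reports a MAP estimate. Inverting the MAP formula then recovers the presupposition from the learning result. The function name and numbers are illustrative only.

```python
# Hypothetical sketch: guessing a learner's "presupposition" (prior)
# from its learning result, assuming the learner used Bayesian (MAP)
# estimation, in the spirit of the abstract.
#
# Setup: the recognition function is a Bernoulli parameter theta, the
# presupposition family is the symmetric Beta(alpha, alpha) priors, and
# the learner reported theta_hat after seeing k positives in n samples.
# For alpha > 1 the MAP estimate is
#     theta_hat = (k + alpha - 1) / (n + 2*alpha - 2),
# which we invert to recover alpha.

def guess_presupposition(k: int, n: int, theta_hat: float) -> float:
    """Return the Beta concentration alpha consistent with the learner's
    reported MAP estimate theta_hat (requires theta_hat != 0.5)."""
    if abs(theta_hat - 0.5) < 1e-12:
        raise ValueError("theta_hat = 0.5 leaves alpha underdetermined")
    return (k + 2 * theta_hat - theta_hat * n - 1) / (2 * theta_hat - 1)

if __name__ == "__main__":
    # A learner saw 3 positives in 4 samples but reported theta_hat = 0.6,
    # milder than the empirical 0.75: its presupposition must have pulled
    # the estimate toward 0.5.
    alpha = guess_presupposition(k=3, n=4, theta_hat=0.6)
    print(f"guessed alpha = {alpha:.3f}")          # alpha = 4.000
    # Check: the MAP under Beta(alpha, alpha) reproduces the reported result.
    map_est = (3 + alpha - 1) / (4 + 2 * alpha - 2)
    print(f"reproduced MAP = {map_est:.3f}")       # 0.600
```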

  • Mathematical Aspects of Neuro-Dynamics for Combinatorial Optimization

    Yoshinori UESAKA  

     
    INVITED PAPER

      Vol: E74-A, No: 6, Page(s): 1368-1372

    Hopfield's neural networks are known to have the potential to solve combinatorial optimization problems. In practice, however, the networks often fail to reach the optimum solution. The present paper aims to clarify the exact cause of such failures from a mathematical point of view. A normal form of the objective function to be minimized is introduced: it is shown that almost all combinatorial optimization problems can be reduced to minimum-search problems for real-valued quadratic functions of two-valued variables without linear terms. A dynamical system, implemented by an idealized neural network of massively connected neurons, is induced by extending the objective function to a multidimensional Euclidean space. The asymptotically stable states of this dynamical system are shown to lie at the vertices of a hypercube that correspond to minimal points of the objective function. Hence, if the initial state is rightly selected, the state of the dynamical system approaches the minimum point of the objective function, and the corresponding optimization problem is completely solved. This indicates that only the problem of how to find a right initial state remains to be investigated. Through computer simulation, a conjecture on initial-state selection is given: when the initial state is selected at random from a hypercube centered at the origin of the state space, the probability that the minimum search succeeds converges to 1 as the size of the cube shrinks to 0.
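
    As a rough illustration of the setting, here is a minimal sketch; it substitutes standard discrete asynchronous Hopfield descent for the paper's continuous dynamical system, so it only mirrors the abstract's normal form f(x) = x^T Q x over x in {-1, +1}^n and the dependence of the reached minimum on the initial state. All names and numbers are illustrative.

```python
# Minimal sketch (assumption: standard discrete asynchronous Hopfield
# descent, not the paper's exact continuous dynamics): minimizing
# f(x) = x^T Q x over x in {-1, +1}^n, the normal form the abstract
# describes. With a zero diagonal (x_i^2 = 1 makes it a constant),
# greedy sign updates never increase f, so the network settles at a
# vertex that is a local minimum; which minimum it reaches depends
# entirely on the initial state.
import numpy as np

def hopfield_minimize(Q: np.ndarray, x0: np.ndarray, sweeps: int = 100) -> np.ndarray:
    """Greedy asynchronous descent on f(x) = x^T Q x, x_i in {-1, +1}."""
    x = x0.copy()
    n = len(x)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            # Flipping x_i changes f by -4 * x_i * h_i, where h_i is the
            # local field sum_{j != i} Q_ij x_j; flip whenever that is < 0.
            h = Q[i] @ x - Q[i, i] * x[i]
            if h != 0 and x[i] != -np.sign(h):
                x[i] = -np.sign(h)
                changed = True
        if not changed:
            break  # stable state: a hypercube vertex, locally minimal
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 12
    A = rng.standard_normal((n, n))
    Q = (A + A.T) / 2
    np.fill_diagonal(Q, 0.0)  # diagonal is a constant on {-1, +1}^n
    # Random restarts: different initial vertices reach different minima,
    # which is exactly why initial-state selection is the open problem.
    best = min((hopfield_minimize(Q, rng.choice([-1.0, 1.0], n)) for _ in range(20)),
               key=lambda x: x @ Q @ x)
    print("best objective found:", best @ Q @ best)
```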