Keyword Search Result

[Keyword] over-fitting (2 hits)

Hits 1-2 of 2
  • A Pruning Algorithm for Training Cooperative Neural Network Ensembles

    Md. SHAHJAHAN  Kazuyuki MURASE  

    PAPER-Biocybernetics, Neurocomputing

    Vol: E89-D No:3    Page(s): 1257-1269

    We present a training algorithm for creating a neural network (NN) ensemble that performs classification tasks. It employs a competitive decay of hidden nodes in the component NNs as well as a selective deletion of NNs from the ensemble, and is therefore named the pruning algorithm for NN ensembles (PNNE). A node cooperation function of the hidden nodes in each NN is introduced to support the decaying process. Training is based on negative correlation learning, which ensures diversity among the component NNs in the ensemble. The less important networks are deleted by a criterion that indicates over-fitting. PNNE has been tested extensively on a number of standard benchmark problems in machine learning, including the Australian credit card assessment, breast cancer, circle-in-the-square, diabetes, glass identification, ionosphere, iris identification, and soybean identification problems. The results show that the classification performance of NN ensembles produced by PNNE is better than, or competitive with, that of conventional constructive and fixed-architecture algorithms. Furthermore, compared with the constructive algorithm, an NN ensemble produced by PNNE consists of fewer component NNs, and they are more diverse owing to the uniform training of all component NNs.
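
    A minimal sketch of negative correlation learning, the training principle this abstract builds on, is given below. It is a sketch of that principle only, not of PNNE itself: the network size, toy data set, learning rate, and penalty weight lambda are illustrative assumptions. Each network i is trained on its own squared error plus the penalty p_i = -(F_i - F)^2, where F is the ensemble mean output; the usual gradient simplification gives dE_i/dF_i = (F_i - y) - lambda * (F_i - F).

    import numpy as np

    rng = np.random.default_rng(0)

    class TinyNet:
        # One-hidden-layer network: tanh hidden units, scalar linear output.
        def __init__(self, n_in, n_hidden):
            self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
            self.b1 = np.zeros(n_hidden)
            self.w2 = rng.normal(0.0, 0.5, n_hidden)
            self.b2 = 0.0

        def forward(self, X):
            self.h = np.tanh(X @ self.W1 + self.b1)  # cache hidden activations
            return self.h @ self.w2 + self.b2

        def backward(self, X, delta, lr):
            # delta = dLoss/dOutput per sample; one plain gradient-descent step.
            n = len(X)
            dh = np.outer(delta, self.w2) * (1.0 - self.h ** 2)
            self.w2 -= lr * self.h.T @ delta / n
            self.b2 -= lr * delta.mean()
            self.W1 -= lr * X.T @ dh / n
            self.b1 -= lr * dh.mean(axis=0)

    def train_ncl(nets, X, y, lam=0.5, lr=0.2, epochs=1000):
        # Negative correlation learning: net i minimizes
        #   E_i = 0.5*(F_i - y)^2 + lam * p_i,  with  p_i = -(F_i - F)^2,
        # giving the simplified gradient  (F_i - y) - lam * (F_i - F).
        for _ in range(epochs):
            outputs = np.array([net.forward(X) for net in nets])
            F = outputs.mean(axis=0)               # ensemble output
            for i, net in enumerate(nets):
                delta = (outputs[i] - y) - lam * (outputs[i] - F)
                net.backward(X, delta, lr)

    # Toy circle-in-the-square data (an assumption, echoing one benchmark name).
    X = rng.uniform(-1.0, 1.0, (200, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)

    nets = [TinyNet(2, 5) for _ in range(4)]
    train_ncl(nets, X, y)
    pred = np.mean([net.forward(X) for net in nets], axis=0) > 0.5
    print("training accuracy:", (pred == y.astype(bool)).mean())

    The penalty term pushes each network away from the ensemble mean, so the component NNs make decorrelated errors that tend to average out in the ensemble prediction; this is the diversity the abstract refers to.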

  • Realization of Admissibility for Supervised Learning

    Akira HIRABAYASHI  Hidemitsu OGAWA  Akiko NAKASHIMA  

    PAPER-Biocybernetics, Neurocomputing

    Vol: E83-D No:5    Page(s): 1170-1176

    In supervised learning, one of the major learning methods is memorization learning (ML). Since it reduces only the training error, ML does not in general guarantee good generalization capability. When ML is used in practice, however, good generalization capability is expected. One of the present authors, H. Ogawa, interpreted this usage of ML as a means of realizing 'true objective learning', which directly takes generalization capability into account, and introduced the concept of admissibility. If a learning method can provide the same generalization capability as a true objective learning, the objective learning is said to admit the learning method. Hence, when admissibility does not hold, it becomes important to make it hold. In this paper, we introduce the concept of realization of admissibility, and devise a method for realizing the admissibility of ML with respect to projection learning, which directly takes generalization capability into account.
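
    A compact restatement of the setting, in the operator notation of Ogawa's function-analytic learning framework, may help; the formulas below are reconstructed from that framework as assumptions rather than quoted from the paper:

    \[ y = Af + n, \qquad \hat{f} = Xy \]

    Here $f \in H$ is the unknown function, $A$ the sampling operator, $n$ additive noise, and $X$ the learning operator to be designed. Memorization learning evaluates $X$ only through the training error $\|AXy - y\|^2$, whereas projection learning imposes the constraint

    \[ XA = P_{\mathcal{R}(A^{*})}, \]

    where $P_{\mathcal{R}(A^{*})}$ is the orthogonal projection onto the range of the adjoint $A^{*}$, so the estimate recovers exactly the component of $f$ visible through the samples. In these terms, projection learning admits ML if some ML operator attains the same generalization capability as a projection learning operator, and realizing admissibility means transforming the training data so that such an operator exists.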