We present a training algorithm that creates a neural network (NN) ensemble for classification tasks. It employs a competitive decay of hidden nodes in the component NNs as well as a selective deletion of NNs in the ensemble; it is therefore called a pruning algorithm for NN ensembles (PNNE). A node cooperation function over the hidden nodes of each NN is introduced to support the decaying process. Training is based on negative correlation learning, which ensures diversity among the component NNs in the ensemble. Less important networks are deleted according to a criterion that indicates over-fitting. The PNNE has been tested extensively on a number of standard benchmark problems in machine learning, including the Australian credit card assessment, breast cancer, circle-in-the-square, diabetes, glass identification, ionosphere, iris identification, and soybean identification problems. The results show that the classification performance of NN ensembles produced by the PNNE is better than or comparable to that of conventional constructive and fixed-architecture algorithms. Furthermore, compared with the constructive algorithm, an NN ensemble produced by the PNNE consists of fewer component NNs, and these NNs are more diverse owing to the uniform training applied to all of them.
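As a rough illustration of the negative correlation learning on which the training is based (this is the standard formulation due to Liu and Yao, not the PNNE-specific procedure; the penalty strength lam and the toy outputs are assumptions for the example, and the node cooperation function and pruning criterion of the paper are not modeled here), each component NN i is trained on its own error plus a penalty that discourages its output from correlating with the other networks' deviations from the ensemble mean:

    import numpy as np

    def ncl_loss_and_grad(outputs, target, lam=0.5):
        # outputs: shape (M,), the M component NNs' outputs for one example
        # target: desired output d for that example
        # lam: negative-correlation penalty strength (illustrative value)
        M = outputs.shape[0]
        F_bar = outputs.mean()                                  # ensemble output
        losses = np.empty(M)
        grads = np.empty(M)
        for i in range(M):
            others = np.delete(outputs, i)
            # correlation penalty p_i = (F_i - F_bar) * sum_{j != i} (F_j - F_bar)
            p_i = (outputs[i] - F_bar) * np.sum(others - F_bar)
            losses[i] = 0.5 * (outputs[i] - target) ** 2 + lam * p_i
            # gradient w.r.t. F_i, treating F_bar as constant (usual NCL approximation)
            grads[i] = (outputs[i] - target) - lam * (outputs[i] - F_bar)
        return losses, grads

    # usage: three component outputs for one training example
    losses, grads = ncl_loss_and_grad(np.array([0.8, 0.6, 0.3]), target=1.0)

Because the penalty pushes each network's output away from the ensemble mean in the direction opposite to the other networks, the components specialize on different parts of the error, which is the source of the diversity mentioned above.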
Akira HIRABAYASHI, Hidemitsu OGAWA, Akiko NAKASHIMA
In supervised learning, one of the major learning methods is memorization learning (ML). Since it reduces only the training error, ML does not in general guarantee good generalization capability. When ML is used, however, good generalization capability is nevertheless expected. One of the present authors, H. Ogawa, interpreted this usage of ML as a means of realizing a 'true objective learning' that directly takes generalization capability into account, and introduced the concept of admissibility. If a learning method provides the same generalization capability as the true objective learning, the objective learning is said to admit the learning method. Hence, when admissibility does not hold, it becomes important to make it hold. In this paper, we introduce the concept of realization of admissibility and devise a method for realizing the admissibility of ML with respect to projection learning, which directly takes generalization capability into account.
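As a rough sketch of the distinction between the two criteria (the notation below is ours, not necessarily the paper's, and is only one common way of writing Ogawa's function-analytic setting): model the sample values as $y = Af + n$ for a true function $f$, a sampling operator $A$, and additive noise $n$, and write a learning result as $\hat{f} = Xy$ for a learning operator $X$. Then ML selects $X$ by the training error alone, while an objective learning such as projection learning is defined through the generalization error:

    J_T(X) = \| A X y - y \|^{2}, \qquad J_G(X) = E_{n}\, \| X y - f \|^{2},

where $J_T$ is the training error minimized by ML and $J_G$ is the generalization error addressed by the objective learning. Loosely speaking, admissibility asks whether some $X$ chosen by the ML criterion attains the same generalization capability as the operator chosen by the objective learning; realization of admissibility, as introduced in the paper, concerns how to make this hold when it does not.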