Neural Network Model Switching for Efficient Feature Extraction

Keisuke KAMEYAMA, Yukio KOSUGI

Summary:

In order to improve the efficiency of feature extraction in backpropagation (BP) learning of layered neural networks, model switching, which changes the function model without altering the map, is proposed. Model switching involves map-preserving reduction of units by channel fusion, or addition of units by channel installation. For reducing the model size by channel fusion, two criteria for detecting redundant channels are presented, and the local link weight compensations required for map preservation are formulated. Upper limits on the discrepancies between the maps of the switched models are derived for use as a unified criterion in selecting the candidate model to switch to. In the experiments, model switching is used during BP training of a layered network model for image texture classification to remedy the inefficiency of its feature extraction. The results show that fusion and re-installation of redundant channels, weight compensation on channel fusion for map preservation, and the use of the unified criterion for model selection are all effective for improving generalization ability and speeding up learning. Further, the possibility of using model switching for concurrent optimization of the model and the map is discussed.
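
As an illustration of the map-preserving channel fusion summarized above, the following Python sketch (not the authors' formulation; the function name, the tanh nonlinearity, the exact-duplicate detection test, and the tolerance tol are all assumptions made for this example) removes hidden channels of a single-hidden-layer network whose activations coincide on a set of probe inputs and folds their outgoing link weights into the surviving channel, leaving the network's input-output map unchanged.

import numpy as np

def fuse_duplicate_channels(W1, b1, W2, X, tol=1e-6):
    # W1: (n_hidden, n_in) input-to-hidden weights; b1: (n_hidden,) hidden biases
    # W2: (n_out, n_hidden) hidden-to-output weights
    # X:  (n_samples, n_in) probe inputs used to detect redundant channels
    H = np.tanh(X @ W1.T + b1)           # hidden-channel activations on the probe data
    W2 = W2.copy()                       # compensate outgoing weights on a copy
    keep = list(range(W1.shape[0]))
    i = 0
    while i < len(keep):
        j = i + 1
        while j < len(keep):
            a, b = keep[i], keep[j]
            if np.max(np.abs(H[:, a] - H[:, b])) < tol:
                # Channels a and b carry (nearly) the same signal: fold b's
                # outgoing weights into a and drop b. The output map is preserved
                # because W2[:,a]*h + W2[:,b]*h == (W2[:,a] + W2[:,b])*h.
                W2[:, a] += W2[:, b]
                keep.pop(j)
            else:
                j += 1
        i += 1
    return W1[keep], b1[keep], W2[:, keep]

The paper's actual redundancy criteria and the weight compensations for the general (non-identical) case are given in the full text; this sketch only captures the exact-duplicate case, where fusion preserves the map trivially.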

Publication
IEICE TRANSACTIONS on Information Vol.E82-D No.10 pp.1372-1383
Publication Date
1999/10/25
Type of Manuscript
PAPER
Category
Image Processing, Computer Graphics and Pattern Recognition
