
IEICE TRANSACTIONS on Fundamentals

Robust Performance Using Cascaded Artificial Neural Network Architecture

Joarder KAMRUZZAMAN, Yukio KUMAGAI, Hiromitsu HIKITA

Summary:

It has been reported that the generalization performance of multilayer feedforward networks strongly depends on the attainment of saturated hidden outputs in response to the training set. A standard backpropagation (BP) network, however, mostly uses intermediate values of the hidden units as the internal representation of the training patterns. In this letter, we propose the construction of a 3-layer cascaded network in which two 2-layer networks are first trained independently by the delta rule and then cascaded. After cascading, the intermediate layer can be viewed as a hidden layer that is trained to attain preassigned saturated outputs in response to the training set. This network is particularly easy to construct for a linearly separable training set, and it can also be constructed for nonlinearly separable tasks by using higher-order inputs at the input layer or by assigning proper codes at the intermediate layer, which can be obtained from a trained Fahlman and Lebiere's network. Simulation results show that, at least when the training set is linearly separable, use of the proposed cascaded network significantly enhances generalization performance compared to a BP network, and that it also maintains high generalization ability for nonlinearly separable training sets. The dependence of the cascaded network's performance on the preassigned codes at the intermediate layer is discussed, and a suggestion about the preassigned coding is presented.
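The following is a minimal sketch, not the authors' code, of the two-stage construction described in the summary: two 2-layer sigmoid networks are trained independently by the delta rule and then cascaded, so that the intermediate layer reproduces preassigned saturated codes. The task (AND), the particular code values (0.05/0.95), and the learning parameters are hypothetical choices for illustration only.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_delta_rule(X, T, epochs=2000, lr=0.5, rng=None):
    """Train a single weight layer (a 2-layer sigmoid network) by the delta rule."""
    rng = np.random.default_rng(0) if rng is None else rng
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])        # append bias input
    W = rng.normal(scale=0.1, size=(Xb.shape[1], T.shape[1]))
    for _ in range(epochs):
        Y = sigmoid(Xb @ W)
        grad = Xb.T @ ((Y - T) * Y * (1 - Y))             # delta-rule gradient (squared error)
        W -= lr * grad
    return W

def forward(X, W):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return sigmoid(Xb @ W)

# Toy linearly separable task (AND) with hypothetical saturated intermediate codes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
codes = np.array([[0.05, 0.95], [0.05, 0.95], [0.05, 0.95], [0.95, 0.05]])  # preassigned codes
targets = np.array([[0.0], [0.0], [0.0], [1.0]])

# Stage 1: train the input-to-intermediate network to reproduce the preassigned codes.
W1 = train_delta_rule(X, codes)
# Stage 2: train the intermediate-to-output network, using the codes as its inputs.
W2 = train_delta_rule(codes, targets)

# Cascade the two networks: the intermediate layer now behaves as a hidden layer
# whose outputs are saturated (close to 0 or 1) for every training pattern.
hidden = forward(X, W1)
output = forward(hidden, W2)
print(np.round(output, 2))

Because each stage involves only a single weight layer, both stages can be trained to arbitrary accuracy when their respective mappings are linearly separable, which is the condition emphasized in the letter.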

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E76-A No.6 pp.1023-1030
Publication Date
1993/06/25
Type of Manuscript
LETTER
Category
Digital Signal Processing
