Rameswar DEBNATH Haruhisa TAKAHASHI
Structural learning algorithms are obtained by adding a penalty criterion (usually derived from the network structure) to the conventional sum-of-squared-errors criterion and applying the backpropagation (BP) algorithm. This problem can be viewed as a constrained minimization problem. In this paper, we apply the Lagrangian differential gradient method to structural learning based on a backpropagation-like algorithm. Computational experiments on both artificial and real data show that the proposed method improves generalization performance and optimizes the network structure.
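The constrained formulation described above can be illustrated with a minimal sketch (my own construction, not the authors' exact algorithm): the sum of squared errors is minimized subject to a structural penalty budget, and the Lagrangian differential gradient method performs gradient descent on the weights together with gradient ascent on the Lagrange multiplier. A linear model stands in for the network, and the penalty budget `c` is a hypothetical value chosen for illustration.

```python
import numpy as np

# Constrained problem:  minimize E(w) = sum of squared errors
#                       subject to P(w) <= c, with P(w) = ||w||^2 as a
#                       stand-in structural penalty.
# Lagrangian differential gradient updates:
#   w   <- w - eta * (dE/dw + lam * dP/dw)    (descent in the weights)
#   lam <- max(0, lam + eta * (P(w) - c))     (ascent in the multiplier)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=50)

w = np.zeros(3)
lam = 0.0
c = 4.0      # hypothetical penalty budget
eta = 0.01   # step size for both primal and dual updates

for _ in range(2000):
    err = X @ w - y
    grad_E = 2 * X.T @ err / len(y)   # gradient of the mean squared error
    grad_P = 2 * w                    # gradient of the structural penalty
    w -= eta * (grad_E + lam * grad_P)
    lam = max(0.0, lam + eta * (w @ w - c))

print("w =", w, " penalty =", w @ w, " lambda =", lam)
```

At convergence the multiplier is positive and the penalty sits near the budget `c`, shrinking the weights relative to the unconstrained least-squares fit; in the structural-learning setting this shrinkage is what prunes superfluous connections.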