
Keyword Search Result

[Keyword] multi-layered neural network (2 hits)

1-2 of 2 hits
  • A Rule-Embedded Neural-Network and Its Effectiveness in Pattern Recognition with Ill-Posed Conditions

    Mina MARUYAMA  Nobuo TSUDA  Kiyoshi NAKABAYASHI  

     
    PAPER-Bio-Cybernetics and Neurocomputing

    Vol: E78-D No:2  Page(s): 152-162

    This paper describes an advanced rule-embedded neural network (RENN+) that has an extended framework for achieving a very tight integration of learning-based neural networks and rule bases of existing if-then rules. The RENN+ is effective in pattern recognition under ill-posed conditions. It is basically composed of several component RENNs and an output RENN, each of which is a three-layer back-propagation (BP) network, not counting the input layer. Each RENN can be pre-organized by embedding the if-then rules through translation of the rules into logic functions in disjunctive normal form, and can be trained to acquire adaptive rules as required. A weight-modification-reduced (WMR) learning algorithm capable of standard regularization is used for the post-training to suppress excessive modification of the weights that encode the embedded rules. To evaluate the effectiveness of the proposed RENN+, it was applied to pattern recognition in a radar system for detecting buried pipes. This trial showed that a RENN+ with two component RENNs had good recognition capability, whereas a conventional BP network was ineffective.
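
    The abstract does not spell out the rule-translation step. As a rough illustration only, the Python/NumPy sketch below (all names and details are hypothetical, not taken from the paper) shows the general idea of pre-organizing a network from if-then rules: each hidden unit is wired to realize one conjunctive term of a rule set in disjunctive normal form, and the output unit realizes their disjunction.

    import numpy as np

    def term_unit(literals, n_inputs):
        # Pre-set one hidden unit so it fires only when its conjunctive term
        # holds; literals maps input index -> True (x_i) or False (NOT x_i).
        w = np.zeros(n_inputs)
        for i, positive in literals.items():
            w[i] = 1.0 if positive else -1.0
        # Fires iff every positive literal is 1 and every negated literal is 0.
        b = 0.5 - sum(1.0 for p in literals.values() if p)
        return w, b

    def sigmoid(z, gain=10.0):
        # Steep sigmoid, so the embedded logic is nearly crisp before training.
        return 1.0 / (1.0 + np.exp(-gain * z))

    # Embed the DNF rule  y = (x0 AND NOT x1) OR x2  into a 3-2-1 network.
    terms = [{0: True, 1: False}, {2: True}]
    W, b = map(np.array, zip(*(term_unit(t, 3) for t in terms)))
    v, c = np.ones(len(terms)), -0.5        # output unit: OR of the term units

    def predict(x):
        return sigmoid(v @ sigmoid(W @ x + b) + c)

    print(predict(np.array([1.0, 0.0, 0.0])))  # ~1: first term fires
    print(predict(np.array([1.0, 1.0, 0.0])))  # ~0: no term fires

    BP training would then refine these pre-set weights, with the WMR step penalizing large drifts away from the embedded values.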

  • Fast Convergent Genetic-Type Search for Multi-Layered Network

    Shu-Hung LEUNG  Andrew LUK  Sin-Chun NG  

     
    PAPER-Neural Networks

    Vol: E77-A No:9  Page(s): 1484-1492

    The classical supervised learning algorithms for optimizing multi-layered feedforward neural networks, such as the original back-propagation algorithm, suffer from several weaknesses. First, they can become trapped in local minima during learning, and so may fail to find the globally optimal solution. Second, even when learning succeeds, the convergence rate is typically slow. This paper introduces a new learning algorithm that employs a genetic-type search during the learning phase of the back-propagation algorithm so that these problems can be overcome. The basic idea is to evolve the network weights in a controlled manner so as to jump to regions of smaller mean squared error whenever back-propagation stops at a local minimum. In this way, local minima can always be escaped, and much faster learning toward the globally optimal solution can be achieved. A mathematical framework for the weight evolution of the new algorithm is also presented, giving a careful analysis of the requirements on weight evolution (or perturbation) during learning in order to achieve better error performance in the weights between different hidden layers. Simulation results on three typical problems, namely XOR, 3-bit parity, and the counting problem, are described to illustrate the fast learning behaviour and the global search capability of the new algorithm in improving the performance of the back-propagation network.
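
    The abstract leaves the evolution step unspecified. As a minimal sketch under stated assumptions (Python/NumPy, all names hypothetical), the escape move below perturbs the current weights in several candidate directions whenever back-propagation stalls and jumps to the candidate with the smallest mean squared error; in the paper the perturbation requirements are derived analytically per hidden layer, whereas a fixed Gaussian scale stands in for that analysis here.

    import numpy as np

    def evolve_weights(weights, mse, n_candidates=8, scale=0.1, rng=None):
        # Genetic-type escape step: sample perturbed copies of the weight
        # list and jump to the candidate with the lowest MSE, if any improves.
        rng = rng or np.random.default_rng(0)
        best_w, best_err = weights, mse(weights)
        for _ in range(n_candidates):
            cand = [w + scale * rng.standard_normal(w.shape) for w in weights]
            err = mse(cand)
            if err < best_err:
                best_w, best_err = cand, err
        return best_w, best_err

    # Toy usage: a single linear layer on an OR-like target, so the jump
    # simply moves downhill; in a real BP loop this step would be invoked
    # only when the gradient updates stop reducing the error.
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    t = np.array([0.0, 1.0, 1.0, 1.0])
    mse = lambda ws: float(np.mean((X @ ws[0] - t) ** 2))
    w, err = evolve_weights([np.zeros(2)], mse)
    print(err)  # smaller than the starting MSE of 0.75

    Because the acceptance test keeps only error-reducing jumps, ordinary back-propagation can resume from the new point without losing progress.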