
Author Search Result

[Author] Kazuyuki MURASE (6 hits)

Results 1-6 of 6
  • A New Crossover Operator and Its Application to Artificial Neural Networks Evolution

    Md. Monirul ISLAM  Kazuyuki MURASE  

     
PAPER-Algorithms
Vol: E84-D No:9, Page(s): 1144-1154

The design of artificial neural networks (ANNs) through simulated evolution has been investigated for many years. The use of genetic algorithms (GAs) for such evolution suffers from a prominent problem known as the permutation problem or the competing conventions problem. This paper proposes a new crossover operator, which we call the selected node crossover (SNX), to overcome the permutation problem of GAs for evolving ANNs. A GA-based evolutionary system (GANet) that uses the SNX to evolve three-layer feedforward ANN architectures with weight learning is described. GANet applies one crossover operator and one mutation operator sequentially; if the first operator is successful, the second is not applied. GANet is less dependent on user-defined control parameters than conventional evolutionary methods. It is applied to a variety of benchmarks ranging from large (26-class) to small (2-class) classification problems. The results show that GANet can produce compact ANN architectures with small classification errors.
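
The SNX details are not reproduced here; as a rough illustration of node-level crossover in its spirit, the following sketch (the function name and the coin-flip selection rule are invented stand-ins, not the paper's operator) exchanges whole hidden-node weight vectors at matching positions, so offspring inherit intact functional units rather than arbitrarily spliced weight strings:

```python
import random

def selected_node_crossover(parent1, parent2, rng=None):
    """Exchange whole hidden-node weight vectors between two parents at
    matching positions, so offspring inherit intact functional units
    rather than arbitrarily spliced weight strings."""
    rng = rng or random.Random(0)
    assert len(parent1) == len(parent2)
    child1 = [list(node) for node in parent1]
    child2 = [list(node) for node in parent2]
    for i in range(len(parent1)):
        if rng.random() < 0.5:  # the node-selection rule here is a stand-in
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2

# Each parent: a list of hidden nodes, each node a weight vector
p1 = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
p2 = [[1.0, 1.1], [1.2, 1.3], [1.4, 1.5]]
c1, c2 = selected_node_crossover(p1, p2)
```

Because crossover moves nodes as units, two parents that encode the same network with permuted hidden nodes cannot produce a child with duplicated or missing functional roles, which is the essence of avoiding the permutation problem.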

  • Incremental Evolution with Learning to Develop the Control System of Autonomous Robots for Complex Task

    Md. Monirul ISLAM  Kazuyuki MURASE  

     
PAPER-Artificial Intelligence, Cognitive Science
Vol: E85-D No:7, Page(s): 1118-1129

Incremental evolution with learning (IEWL) is proposed for the development of autonomous robots, and the validity of the method is evaluated with a real mobile robot acquiring a complex task. Development of the control system for a complex task, i.e., approaching a target object while avoiding obstacles in an environment, is carried out incrementally in two stages. In the first stage, controllers are developed to avoid obstacles in the environment. Using the knowledge acquired in the first stage, controllers are developed in the second stage to approach the target object while avoiding obstacles. It is found that the use of learning in conjunction with incremental evolution is beneficial for maintaining diversity in the evolving population. The performances of two controllers, one developed by IEWL and the other by incremental evolution without learning (IENL), are compared on the given task. The experimental results show that robust performance is achieved when controllers are developed by IEWL.
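
The two-stage hand-off can be sketched abstractly: evolve controllers for the easier sub-task first, then reuse the resulting population as the starting point for the harder task. In this toy sketch both fitness functions and all parameters are invented stand-ins, not the paper's robot controllers:

```python
import random

def evolve(population, fitness, generations, rng):
    """Elitist loop: mutate every individual, then keep the best half of
    parents plus offspring, so the best fitness never decreases."""
    for _ in range(generations):
        offspring = [[g + rng.gauss(0, 0.1) for g in ind] for ind in population]
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:len(population)]
    return population

rng = random.Random(1)
pop = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(8)]

# Stage 1: a hypothetical stand-in for the obstacle-avoidance fitness
stage1_fit = lambda ind: -sum(g * g for g in ind)
pop = evolve(pop, stage1_fit, 20, rng)

# Stage 2: the stage-1 population seeds evolution for the harder
# approach-while-avoiding task (again a toy stand-in fitness)
stage2_fit = lambda ind: -sum((g - 0.5) ** 2 for g in ind)
pop = evolve(pop, stage2_fit, 20, rng)
```

The point of the hand-off is that stage 2 starts from a population of already-competent obstacle avoiders rather than from random genomes.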

  • A New Local Search Based Ant Colony Optimization Algorithm for Solving Combinatorial Optimization Problems

    Md. Rakib HASSAN  Md. Monirul ISLAM  Kazuyuki MURASE  

     
PAPER-Fundamentals of Information Systems
Vol: E93-D No:5, Page(s): 1127-1136

Ant Colony Optimization (ACO) algorithms are a branch of swarm intelligence that has been applied successfully to various combinatorial optimization problems. Their performance is very promising on small problem instances; however, their time complexity increases and solution quality decreases on large instances. It is therefore crucial to reduce the time requirement and at the same time increase the solution quality when ACO algorithms are applied to large combinatorial optimization problems. This paper introduces a Local Search based ACO algorithm (LSACO), a new algorithm for solving large combinatorial optimization problems. The basis of LSACO is to apply an adaptive local search method to improve solution quality. This local search automatically determines the number of edges to exchange during the execution of the algorithm. LSACO also applies a pheromone updating rule and constructs solutions in a new way so as to decrease the convergence time. The performance of LSACO has been evaluated on a number of benchmark combinatorial optimization problems, and the results are compared with several existing ACO algorithms. Experimental results show that LSACO produces good-quality solutions with a higher rate of convergence for most of the problems.
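
The edge-exchange local search at the heart of LSACO can be illustrated with the simplest member of that family: a 2-opt pass on a tour, which exchanges exactly two edges per move. LSACO's adaptive choice of how many edges to exchange is not reproduced in this sketch; the distance matrix is a toy example:

```python
def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Keep reversing segments while any reversal shortens the tour;
    each accepted reversal exchanges exactly two edges."""
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 2, len(best) + 1):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if tour_length(cand, dist) < tour_length(best, dist):
                    best, improved = cand, True
    return best

# 4 cities on a square (adjacent = 1, diagonal = 2 in this toy metric)
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
tour = two_opt([0, 2, 1, 3], dist)  # starts from a crossing tour
```

In an ACO setting such a pass would be applied to each ant's constructed tour before pheromone update, trading extra per-iteration work for much better solutions per iteration.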

  • A Dynamic Node Decaying Method for Pruning Artificial Neural Networks

    Md. SHAHJAHAN  Kazuyuki MURASE  

     
PAPER-Biocybernetics, Neurocomputing
Vol: E86-D No:4, Page(s): 736-751

This paper presents a dynamic node decaying method (DNDM) for layered artificial neural networks that is suitable for classification problems. Our purpose is not to minimize the total output error but to obtain high generalization ability with a minimal structure. Users of the conventional back propagation (BP) learning algorithm can convert their program to the DNDM by inserting only a few lines. The method extends a previously proposed method to more general classification problems, and its validity is tested with recent standard benchmark problems. In addition, we analyze the training process and the effects of various parameters. In the method, nodes in a layer compete for survival in an automatic process based on an importance criterion. Relatively less important nodes are decayed gradually during BP learning while more important ones play larger roles, until the best performance under the given conditions is achieved. The criterion evaluates each node by its total influence on the upper layer, and it is used as the index for dynamic competitive decaying. Two additional criteria are used: Generalization Loss, to measure over-fitting, and Learning Progress, to decide when to stop training. Determining these criteria requires only a few human interventions. We have applied the algorithm to several standard benchmark problems such as the cancer, diabetes, heart disease, glass, and iris problems. The results show the effectiveness of the method: the classification error and size of the generated networks are comparable to those obtained by other methods, which generally require larger modifications to, or a complete rewriting of, the conventional BP program.
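
As a rough illustration of decaying the least influential node, the sketch below scores each hidden node by its mean absolute activity times the magnitude of its outgoing weights (one plausible reading of the influence criterion; the paper's exact formula may differ) and shrinks the weakest node's weights toward zero:

```python
import numpy as np

def node_influence(hidden_acts, w_out):
    """Score each hidden node by mean absolute activity times the total
    magnitude of its outgoing weights to the upper layer."""
    return np.abs(hidden_acts).mean(axis=0) * np.abs(w_out).sum(axis=1)

def decay_weakest(w_in, w_out, hidden_acts, rate=0.5):
    """Shrink the incoming and outgoing weights of the least influential
    node; repeated during training, this gradually removes it."""
    k = int(np.argmin(node_influence(hidden_acts, w_out)))
    w_in[:, k] *= rate
    w_out[k, :] *= rate
    return k

rng = np.random.default_rng(0)
w_in = rng.normal(size=(3, 4))    # 3 inputs -> 4 hidden nodes
w_out = rng.normal(size=(4, 2))   # 4 hidden nodes -> 2 outputs
acts = rng.random((10, 4))        # hidden activities over a batch
weakest = decay_weakest(w_in, w_out, acts)
```

Calling such a decay step between BP epochs is the kind of small insertion the abstract describes: the BP update itself is untouched.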

  • Neural Network Training Algorithm with Positive Correlation

    Md. SHAHJAHAN  Kazuyuki MURASE  

     
PAPER-Biocybernetics, Neurocomputing
Vol: E88-D No:10, Page(s): 2399-2409

In this paper, we present a learning approach, positive correlation learning (PCL), that creates a multilayer neural network with good generalization ability. A correlation function is added to the standard error function of back propagation learning, and the combined error function is minimized by a steepest-descent method. During training, all the unnecessary units in the hidden layer become correlated with the necessary ones in a positive sense. PCL can therefore create positively correlated activities of hidden units in response to input patterns. We show that PCL reduces the information on the input patterns and decays the weights, both of which lead to improved generalization ability. Here, the information is defined with respect to hidden-unit activity, since the hidden units play a crucial role in storing the information on the input patterns. That is, as previously proposed, the information is defined as the difference between the uncertainty of a hidden unit at the initial stage of learning and its uncertainty at the final stage. After deriving new weight update rules for PCL, we applied the method to several standard benchmark classification problems, such as the breast cancer, diabetes, and glass identification problems. Experimental results confirmed that PCL produces positively correlated hidden units and significantly reduces the amount of information, resulting in improved generalization ability.
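
The idea of adding a correlation term to the error can be sketched as follows. Here the added term is the negative mean pairwise correlation of hidden-unit activities, so minimizing (error + penalty) favors positively correlated units; this functional form is a stand-in, not necessarily the one used in PCL:

```python
import numpy as np

def pcl_penalty(hidden_acts, lam=0.1):
    """Negative mean pairwise correlation of hidden-unit activities,
    scaled by lam. Adding this to the usual error and minimizing the
    sum pushes hidden units toward positively correlated activity."""
    H = hidden_acts - hidden_acts.mean(axis=0)
    cov = H.T @ H / len(hidden_acts)
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    n = corr.shape[0]
    mean_pairwise = (corr.sum() - n) / (n * (n - 1))  # off-diagonal mean
    return -lam * mean_pairwise

# Two toy activity matrices (rows = patterns, columns = hidden units):
a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # units correlate at +1
b = np.array([[0.0, 2.0], [1.0, 1.0], [2.0, 0.0]])  # units correlate at -1
```

Perfectly correlated activities (`a`) receive a lower penalty than anti-correlated ones (`b`), so gradient descent on the combined error pulls the hidden layer toward the positively correlated regime the paper describes.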

  • A Pruning Algorithm for Training Cooperative Neural Network Ensembles

    Md. SHAHJAHAN  Kazuyuki MURASE  

     
PAPER-Biocybernetics, Neurocomputing
Vol: E89-D No:3, Page(s): 1257-1269

We present a training algorithm to create a neural network (NN) ensemble that performs classification tasks. It employs a competitive decay of hidden nodes in the component NNs as well as a selective deletion of NNs from the ensemble, and is thus named a pruning algorithm for NN ensembles (PNNE). A node cooperation function of hidden nodes in each NN is introduced to support the decaying process. The training is based on negative correlation learning, which ensures diversity among the component NNs in the ensemble. The less important networks are deleted by a criterion that indicates over-fitting. The PNNE has been tested extensively on a number of standard benchmark problems in machine learning, including the Australian credit card assessment, breast cancer, circle-in-the-square, diabetes, glass identification, ionosphere, iris identification, and soybean identification problems. The results show that the classification performance of NN ensembles produced by the PNNE is better than, or competitive with, that of conventional constructive and fixed-architecture algorithms. Furthermore, compared with the constructive algorithm, NN ensembles produced by the PNNE consist of a smaller number of component NNs, and the components are more diverse owing to the uniform training of all component NNs.
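
The negative correlation learning underlying PNNE trains each component network on its own squared error plus a penalty p_i = (F_i - Fbar) * sum over j != i of (F_j - Fbar), which algebraically equals -(F_i - Fbar)^2 and so rewards disagreement with the ensemble mean. A minimal sketch of that error computation (PNNE's node-decay and network-deletion machinery is omitted; the prediction values are toy data):

```python
import numpy as np

def ncl_errors(preds, target, lam=0.5):
    """Per-network training error in negative correlation learning:
    squared error plus lam * p_i, where
    p_i = (F_i - Fbar) * sum_{j != i}(F_j - Fbar) = -(F_i - Fbar)**2,
    so each network is rewarded for differing from the ensemble mean."""
    fbar = preds.mean(axis=0)
    errors = []
    for f in preds:
        p = (f - fbar) * ((preds.sum(axis=0) - f) - (len(preds) - 1) * fbar)
        errors.append(0.5 * np.mean((f - target) ** 2) + lam * np.mean(p))
    return errors

preds = np.array([[0.2, 0.8, 0.5, 0.1],
                  [0.3, 0.7, 0.6, 0.2],
                  [0.4, 0.9, 0.4, 0.3]])  # 3 networks x 4 samples
target = np.array([0.0, 1.0, 0.5, 0.0])
errs = ncl_errors(preds, target)
```

Because the penalty is never positive, each network's NCL error is at most its plain squared error; minimizing it trades a little individual accuracy for the ensemble diversity that the pruning stage then exploits.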