Author Search Result

[Author] Taira NAKAJIMA (5 hits)

Results 1-5 of 5
  • A Topology Preserving Neural Network for Nonstationary Distributions

    Taira NAKAJIMA  Hiroyuki TAKIZAWA  Hiroaki KOBAYASHI  Tadao NAKAMURA  

    LETTER-Bio-Cybernetics and Neurocomputing
    Vol: E82-D No:7  Page(s): 1131-1135

    We propose a learning algorithm for self-organizing neural networks to form a topology preserving map from an input manifold whose topology may dynamically change. Experimental results show that the network using the proposed algorithm can rapidly adjust itself to represent the topology of nonstationary input distributions.
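
    The abstract above does not give the update rules, so the following is only a rough sketch of the general setting: a plain Kohonen-style online update applied to a nonstationary input stream. The grid size, neighborhood function, and drifting input distribution are illustrative assumptions, not the proposed algorithm.

        import numpy as np

        def kohonen_step(weights, grid, x, lr=0.1, sigma=1.0):
            """One online Kohonen update: pull the winner and its grid
            neighbors toward the input sample x."""
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Neighborhood strength decays with distance on the node grid.
            h = np.exp(-np.linalg.norm(grid - grid[winner], axis=1) ** 2
                       / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
            return winner

        rng = np.random.default_rng(0)
        grid = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
        weights = rng.uniform(0.0, 1.0, size=(25, 2))

        # Nonstationary input: the distribution's center drifts over time.
        for t in range(5000):
            center = np.array([0.5 + 0.4 * np.sin(t / 500.0), 0.5])
            x = center + 0.1 * rng.standard_normal(2)
            kohonen_step(weights, grid, x)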

  • An Active Learning Algorithm Based on Existing Training Data

    Hiroyuki TAKIZAWA  Taira NAKAJIMA  Hiroaki KOBAYASHI  Tadao NAKAMURA  

    PAPER-Biocybernetics, Neurocomputing
    Vol: E83-D No:1  Page(s): 90-99

    A multilayer perceptron is usually considered a passive learner that only receives given training data. However, if a multilayer perceptron actively gathers training data that resolve its uncertainty about the problem being learnt, sufficiently accurate classification can be attained with fewer training data. Recently, such active learning has been receiving increasing interest. In this paper, we propose a novel active learning strategy. The strategy attempts to produce only training data that are useful for multilayer perceptrons to achieve accurate classification, and avoids generating redundant training data. Furthermore, the strategy attempts to avoid generating temporarily useful training data that will become redundant later. As a result, the strategy allows multilayer perceptrons to achieve accurate classification with fewer training data. To demonstrate the performance of the strategy in comparison with other active learning strategies, we also propose an empirical active learning algorithm that implements the strategy without expensive computations. Experimental results show that the proposed algorithm improves the classification accuracy of a multilayer perceptron with fewer training data than a conventional random selection algorithm that constructs a training data set without an explicit strategy. Moreover, the algorithm outperforms typical active learning algorithms in the experiments. Because training data generation is usually costly, these results also indicate that the algorithm can construct an appropriate training data set at lower computational cost. Accordingly, the experiments demonstrate the effectiveness of the strategy. We also discuss some drawbacks of the algorithm.
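
    The abstract does not spell out the data-generation strategy, so the sketch below uses a different, standard technique (pool-based uncertainty sampling) only to illustrate the general idea of training an MLP on the samples it is least certain about. The toy data set, network size, and batch sizes are assumptions.

        import numpy as np
        from sklearn.datasets import make_moons
        from sklearn.neural_network import MLPClassifier

        X_pool, y_pool = make_moons(n_samples=1000, noise=0.2, random_state=0)

        # Start from a small labeled set containing both classes.
        labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
        unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        for _ in range(10):
            clf.fit(X_pool[labeled], y_pool[labeled])
            # Uncertainty sampling: query the pool points whose predicted
            # class probability is closest to 0.5.
            proba = clf.predict_proba(X_pool[unlabeled])[:, 1]
            picked = [unlabeled[i] for i in np.argsort(np.abs(proba - 0.5))[:10]]
            labeled += picked
            unlabeled = [i for i in unlabeled if i not in picked]

        clf.fit(X_pool[labeled], y_pool[labeled])
        print("training set size:", len(labeled))
        print("accuracy on the full pool:", clf.score(X_pool, y_pool))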

  • Vector Quantization Codebook Design Using the Law-of-the-Jungle Algorithm

    Hiroyuki TAKIZAWA  Taira NAKAJIMA  Kentaro SANO  Hiroaki KOBAYASHI  Tadao NAKAMURA  

    PAPER-Image Processing, Image Pattern Recognition
    Vol: E86-D No:6  Page(s): 1068-1077

    The equidistortion principle [1] has recently been proposed as a basic principle for the design of an optimal vector quantization (VQ) codebook. The equidistortion principle adjusts all codebook vectors such that they contribute equally to the quantization error. This paper introduces a novel VQ codebook design algorithm based on the equidistortion principle. The proposed algorithm is a variant of the law-of-the-jungle algorithm (LOJ), which duplicates useful codebook vectors and removes useless ones. Due to the LOJ mechanism, the proposed algorithm can establish the equidistortion condition without wasting learning steps. This is particularly effective in preventing the performance degradation that occurs when the initial codebook vectors are poorly placed for finding an optimal codebook. Therefore, even with improper initialization, the proposed algorithm can minimize quantization error based on the equidistortion principle. The performance of the proposed algorithm is discussed through experimental results.
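
    The exact LOJ update is not reproduced here; the sketch below is only a crude duplicate-and-delete heuristic in the spirit of the equidistortion idea, layered on ordinary Lloyd iterations: the code vector with the largest distortion contribution is duplicated on top of the one with the smallest. The data, codebook size, threshold, and the deliberately poor initialization are all assumptions.

        import numpy as np

        def lloyd_step(codebook, X):
            """Assign samples to their nearest code vectors, recompute the
            centroids, and return each vector's distortion contribution."""
            d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
            assign = np.argmin(d, axis=1)
            partial = np.zeros(len(codebook))
            for k in range(len(codebook)):
                members = X[assign == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
                    partial[k] = ((members - codebook[k]) ** 2).sum()
            return partial

        rng = np.random.default_rng(0)
        X = rng.standard_normal((2000, 2))
        codebook = rng.standard_normal((16, 2)) + 5.0   # deliberately poor start

        for _ in range(50):
            partial = lloyd_step(codebook, X)
            # Duplicate the most useful vector (largest contribution) over the
            # least useful one; the small nudge lets the pair separate later.
            hi, lo = np.argmax(partial), np.argmin(partial)
            if partial[hi] > 2.0 * partial[lo]:
                codebook[lo] = codebook[hi] + 0.01 * rng.standard_normal(2)

        print("distortion spread (max/min):", partial.max() / max(partial.min(), 1e-12))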

  • Kohonen Learning with a Mechanism, the Law of the Jungle, Capable of Dealing with Nonstationary Probability Distribution Functions

    Taira NAKAJIMA  Hiroyuki TAKIZAWA  Hiroaki KOBAYASHI  Tadao NAKAMURA  

    PAPER-Bio-Cybernetics and Neurocomputing
    Vol: E81-D No:6  Page(s): 584-591

    We present a mechanism, named the law of the jungle (LOJ), to improve Kohonen learning. The LOJ is used as an adaptive vector quantizer for approximating nonstationary probability distribution functions. In the LOJ mechanism, the probability that each node wins a competition is dynamically estimated during learning. Using the estimated win probabilities, "strong" nodes are multiplied by creating new nodes near them, and "weak" nodes are reduced by deleting themselves. Each creation-deletion pair is treated as an atomic operation. Therefore, nodes that cannot win the competition are transferred directly from regions where inputs almost never occur to regions where inputs often occur. This direct "jump" of weak nodes provides rapid convergence. Moreover, the LOJ requires neither time-decaying parameters nor a special periodic adaptation. For these reasons, the LOJ is suitable for quick approximation of nonstationary probability distribution functions. In experimental comparisons with other Kohonen learning networks, only the LOJ can follow nonstationary probability distributions, except under high-noise environments.
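
    The following is only a rough sketch of the jump mechanism described above, assuming a simple winner-take-all quantizer: each node's win probability is tracked with a running average (an assumed estimator), and the weakest node is periodically moved next to the strongest one as a single create-plus-delete jump. All constants are illustrative, not the paper's exact formulation.

        import numpy as np

        rng = np.random.default_rng(0)
        nodes = rng.uniform(-1.0, 1.0, size=(30, 2))
        win_prob = np.full(len(nodes), 1.0 / len(nodes))   # running win-rate estimate

        def step(x, lr=0.05, beta=0.01):
            """Move the winning node toward x and update the win-rate estimates."""
            winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
            nodes[winner] += lr * (x - nodes[winner])
            win_prob[:] = (1.0 - beta) * win_prob
            win_prob[winner] += beta
            return winner

        for t in range(20000):
            # Nonstationary input: the cluster center jumps halfway through.
            center = np.array([-0.5, -0.5]) if t < 10000 else np.array([0.6, 0.6])
            step(center + 0.1 * rng.standard_normal(2))
            if t % 200 == 0:
                # Law of the jungle: atomically delete the weakest node and
                # recreate it next to the strongest one (a direct "jump").
                strong, weak = np.argmax(win_prob), np.argmin(win_prob)
                if win_prob[strong] > 4.0 * win_prob[weak]:
                    nodes[weak] = nodes[strong] + 0.01 * rng.standard_normal(2)
                    win_prob[[strong, weak]] = win_prob[strong] / 2.0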

  • Acceleration Techniques for the Network Inversion Algorithm

    Hiroyuki TAKIZAWA  Taira NAKAJIMA  Masaaki NISHI  Hiroaki KOBAYASHI  Tadao NAKAMURA  

    LETTER-Bio-Cybernetics and Neurocomputing
    Vol: E82-D No:2  Page(s): 508-511

    We apply two acceleration techniques for the backpropagation algorithm to an iterative gradient descent algorithm called the network inversion algorithm. Experimental results show that these techniques are also quite effective in decreasing the number of iterations required to detect input vectors on the classification boundary of a multilayer perceptron.
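
    The abstract does not name the two acceleration techniques, so the sketch below only adds a classical momentum term, one common backpropagation accelerator, to a plain network-inversion loop: gradient descent on the input of a fixed network until its output reaches 0.5, i.e. a point on the classification boundary. The tiny randomly weighted network and all constants are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        # A small fixed 2-4-1 network standing in for a trained perceptron.
        W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
        W2, b2 = rng.standard_normal(4), rng.standard_normal()

        def forward(x):
            h = np.tanh(W1 @ x + b1)
            return 1.0 / (1.0 + np.exp(-(W2 @ h + b2))), h

        def input_gradient(x, target):
            """Gradient of the squared output error with respect to the input."""
            y, h = forward(x)
            dy = 2.0 * (y - target) * y * (1.0 - y)   # through the output sigmoid
            dh = dy * W2 * (1.0 - h ** 2)             # through the tanh hidden layer
            return W1.T @ dh

        # Network inversion with momentum: adjust the input, not the weights.
        x, velocity = rng.standard_normal(2), np.zeros(2)
        lr, mom = 0.1, 0.9
        for _ in range(500):
            velocity = mom * velocity - lr * input_gradient(x, target=0.5)
            x += velocity

        print("boundary point:", x, "network output:", forward(x)[0])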