
Keyword Search Result

[Keyword] SoC (334 hits)

Results 301-320 of 334

  • Associative Memory Model with Forgetting Process Using Nonmonotonic Neurons

    Kazushi MIMURA  Masato OKADA  Koji KURATA  

     
    PAPER-Bio-Cybernetics and Neurocomputing

    Vol: E81-D No:11  Page(s): 1298-1304

    An associative memory model with a forgetting process a la Mezard et al. is investigated for a piecewise nonmonotonic output function by the SCSNA proposed by Shiino and Fukai. Like the formal monotonic two-state model analyzed by Mezard et al., the nonmonotonic model discussed here is also free from catastrophic deterioration of memory due to overloading. We theoretically obtain a relationship between the storage capacity and the forgetting rate, and find that there is an optimal value of the forgetting rate at which the storage capacity is maximized for a given nonmonotonicity. The maximal storage capacity and the capacity ratio (the ratio of the storage capacity under the conventional correlation learning rule to the maximal storage capacity) increase with nonmonotonicity, whereas the optimal forgetting rate decreases with nonmonotonicity.
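    A minimal sketch of the underlying idea of correlation learning with a forgetting term a la Mezard et al. is given below; it uses a plain sign output for recall rather than the paper's piecewise nonmonotonic output function, and the forgetting rate and network size are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200              # number of neurons (illustrative)
decay = 0.97         # illustrative forgetting rate; the paper derives an optimal value

J = np.zeros((N, N))
patterns = []
for _ in range(50):                        # keep presenting new random patterns
    xi = rng.choice([-1, 1], size=N)
    patterns.append(xi)
    J = decay * J + np.outer(xi, xi) / N   # marginalist learning: old memories fade away
np.fill_diagonal(J, 0.0)

def recall(s, steps=20):
    """Synchronous recall with a sign output (the paper analyzes a piecewise
    nonmonotonic output function with the SCSNA instead)."""
    for _ in range(steps):
        s = np.sign(J @ s)
    return s

recent, old = patterns[-1], patterns[0]
print("overlap with newest pattern:", recall(recent.copy()) @ recent / N)
print("overlap with oldest pattern:", recall(old.copy()) @ old / N)
```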

  • A Genetic Algorithm Creates New Attractors in an Associative Memory Network by Pruning Synapses Adaptively

    Akira IMADA  Keijiro ARAKI  

     
    PAPER-Bio-Cybernetics and Neurocomputing

    Vol: E81-D No:11  Page(s): 1290-1297

    We apply evolutionary algorithms to a neural network model of associative memory. In this model, appropriate configurations of the synaptic weights allow the network to store a number of patterns as an associative memory; the so-called Hebbian rule prescribes one such configuration. However, if the number of patterns to be stored exceeds a critical amount (the over-loaded regime), the ability to store patterns largely collapses, and synaptic weights chosen at random have no such ability at all. In this paper, we describe a genetic algorithm which successfully evolves both random synapses and over-loaded Hebbian synapses to function as an associative memory by adaptively pruning some of the synaptic connections. Although many authors have shown that the model is robust against pruning a fraction of the synaptic connections, improvement of performance by pruning has not, as far as we know, been explored.
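    A toy sketch of the pruning idea is shown below: a genetic algorithm evolves a 0/1 mask over over-loaded Hebbian synapses, scored by how well the masked network recalls the stored patterns in one step. The population size, mutation rate, and fitness definition are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 30                               # neurons, stored patterns (over-loaded)
patterns = rng.choice([-1, 1], size=(P, N))
J = patterns.T @ patterns / N                # Hebbian (correlation) synapses
np.fill_diagonal(J, 0.0)

def fitness(mask):
    """Mean one-step overlap of the stored patterns under the pruned weights."""
    out = np.sign(patterns @ (J * mask).T)
    return np.mean(np.sum(out * patterns, axis=1) / N)

pop = rng.integers(0, 2, size=(40, N, N))    # population of 0/1 pruning masks
for generation in range(50):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-20:]]  # keep the better half
    children = parents.copy()
    flips = rng.random(children.shape) < 0.01
    children = np.where(flips, 1 - children, children)   # mutate 1% of connections
    pop = np.concatenate([parents, children])

best = max(pop, key=fitness)
print("fitness of best pruning mask:", fitness(best))
print("fitness of unpruned network: ", fitness(np.ones((N, N))))
```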

  • Associative Semantic Memory Capable of Fast Inference on Conceptual Hierarchies

    Qing MA  Hitoshi ISAHARA  

     
    PAPER-Bio-Cybernetics and Neurocomputing

    Vol: E81-D No:6  Page(s): 572-583

    The adaptive associative memory proposed by Ma is used to construct a new model of semantic network, referred to as associative semantic memory (ASM). Its main novelty is computational effectiveness, an important issue in knowledge representation: the ASM can perform inference over large conceptual hierarchies extremely fast, in time that does not increase with the size of the hierarchy, a performance that cannot be realized by any existing system. In addition, the ASM has a simple, easily understandable architecture and is flexible in the sense that knowledge can easily be modified by one-shot relearning and generalization of knowledge is a basic system property. Theoretical analyses are given for the general case to guarantee that the ASM can infer flawlessly via pattern segmentation and recovery, the two basic functions of the adaptive associative memory.

  • Optimal Design of Hopfield-Type Associative Memory by Adaptive Stability-Growth Method

    Xue-Bin LIANG  Toru YAMAGUCHI  

     
    LETTER-Bio-Cybernetics and Neurocomputing

    Vol: E81-D No:1  Page(s): 148-150

    An adaptive stability-growth (ASG) learning algorithm is proposed for improving, as much as possible, the stability of a Hopfield-type associative memory. The ASG algorithm can be used to determine the optimal stability in place of the well-known minimum-overlap (MO) learning algorithm with a sufficiently large lower bound on the MO value, while converging much more quickly than the MO algorithm in practical implementations. The proposed ASG algorithm is therefore more suitable than the MO algorithm for the real-world design of an optimal Hopfield-type associative memory.
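    The letter does not spell out the update rule in the abstract; the sketch below only illustrates the general notion of stability-based learning with a growing stability target for a Hopfield-type memory. The growth schedule, threshold, and sizes are assumptions, not the ASG algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 100, 20
patterns = rng.choice([-1, 1], size=(P, N))
W = np.zeros((N, N))

# Perceptron-style training that keeps raising a per-row stability target kappa;
# this illustrates the notion of "growing" stability, not the ASG rule itself.
kappa, eps = 0.0, 1e-12
for epoch in range(500):
    updated = False
    for xi in patterns.astype(float):
        stability = xi * (W @ xi) / (np.linalg.norm(W, axis=1) + eps)
        unstable = stability < kappa            # rows whose stability is too small
        W[unstable] += np.outer(xi[unstable], xi) / N
        updated = updated or bool(unstable.any())
    np.fill_diagonal(W, 0.0)
    if not updated:
        kappa += 0.05                           # every pattern is stable: raise the bar
print("stability target reached after 500 epochs:", round(kappa, 2))
```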

  • A Stochastic Associative Memory Using Single-Electron Tunneling Devices

    Makoto SAEN  Takashi MORIE  Makoto NAGATA  Atsushi IWATA  

     
    PAPER

    Vol: E81-C No:1  Page(s): 30-35

    This paper proposes a new associative memory architecture using stochastic behavior in single electron tunneling (SET) devices. The memory stochastically extracts the pattern most similar to the input key pattern from the stored patterns in two matching modes: a voltage-domain matching mode and a time-domain matching mode. In the former, ordinary associative memory operation can be performed. In the latter, a purely stochastic search is performed; even in this case, the order of similarity can be obtained by repeating many search trials. We propose a circuit using SET devices based on this architecture and demonstrate its basic operation through simulation. By feeding the output pattern back to the input, the memory successively retrieves slightly dissimilar patterns. This function may be the key to developing highly intelligent information processing systems close to the human brain.

  • Functionality Enhancement in Elemental Devices for Implementing Intelligence on Integrated Circuits

    Tadahiro OHMI  Tadashi SHIBATA  

     
    INVITED PAPER

    Vol: E80-C No:7  Page(s): 841-848

    An alternative approach to increasing the functional capability of an integrated circuit chip, other than the conventional scaling approach, is presented and discussed. We show that functional enhancement at the level of very elementary devices is essential for implementing intelligent functions at the system level. The concept of a four-terminal device is reviewed as a guiding principle for device functionality enhancement. As an example of a four-terminal device, the neuron MOS transistor is presented. Applications of neuron MOS transistors to several circuits with new architectures are demonstrated, and the possibility of implementing intelligent functions directly in integrated circuit hardware is discussed.

  • Learning Time of Linear Associative Memory

    Toshiyuki TANAKA  Hideki KURIYAMA  Yoshiko OCHIAI  Masao TAKI  

     
    PAPER-Neural Networks

    Vol: E80-A No:6  Page(s): 1150-1156

    Neural networks can be used as associative memories which learn input-output relations presented by examples. The learning time problem asks how long it takes a neural network to learn a given problem with a learning algorithm. As a solvable instance of this problem, we analyze the learning dynamics of the linear associative memory trained with the least-mean-square algorithm. Our result shows that the learning time τ of the linear associative memory diverges as τ ∝ (1-ρ)^-2 as the memory rate ρ approaches 1. It also shows that the learning time depends exponentially on ρ when ρ is small.
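    A small numerical sketch of the setting is given below: a linear map trained with a batch least-mean-square rule on P = ρN random pattern pairs, timing how many epochs are needed to reach a fixed error. The learning rate, tolerance, and pattern statistics are assumptions; the paper's analytical result is τ ∝ (1-ρ)^-2 as ρ → 1.

```python
import numpy as np

rng = np.random.default_rng(3)

def learning_time(N=60, rho=0.5, eta=0.2, tol=1e-3, max_epochs=50_000):
    """Epochs the LMS rule needs to map P = rho*N random keys onto random targets
    (capped at max_epochs if the tolerance has not been reached)."""
    P = max(1, int(rho * N))
    X = rng.standard_normal((P, N)) / np.sqrt(N)   # key patterns
    Y = rng.standard_normal((P, N)) / np.sqrt(N)   # desired output patterns
    W = np.zeros((N, N))
    for epoch in range(1, max_epochs + 1):
        err = Y - X @ W.T
        W += eta * err.T @ X                       # batch least-mean-square update
        if np.mean(err ** 2) < tol:
            return epoch
    return max_epochs

for rho in (0.3, 0.6, 0.9):
    print(f"rho = {rho}: tau ~ {learning_time(rho=rho)} epochs")
```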

  • A Synergetic Neural Network with Crosscorrelation Dynamics

    Masahiro NAKAGAWA  

     
    PAPER-Neural Networks

    Vol: E80-A No:5  Page(s): 881-893

    In this study we put forward a bidirectional synergetic neural network and investigate its cross-association dynamics in an order-parameter space. The present model is based on a top-down formulation of the dynamic rule of an analog neural network, in analogy with the conventional bidirectional associative memory. It is proved that complete association can be assured for up to as many embedded patterns as there are neurons. In addition, a search for a pair of embedded patterns can also be realised by controlling the attraction parameters, as in the autoassociative synergetic models.
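    The abstract builds on synergetic (Haken-type) order-parameter dynamics; a generic sketch of such winner-take-all dynamics is given below. The particular equations, coefficients, and attention parameters follow the standard autoassociative synergetic-computer form, used here only as an assumed illustration, not the paper's bidirectional formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
M = 4                                       # number of embedded patterns
lam = np.ones(M)                            # attention/attraction parameters
B, C, dt = 1.0, 1.0, 0.01

xi = rng.uniform(0.0, 0.1, size=M)          # order parameters (overlaps with patterns)
xi[2] += 0.05                               # pattern 2 starts with the largest overlap
for step in range(5000):
    total = np.sum(xi ** 2)
    # Standard synergetic-computer order-parameter equation (winner-take-all):
    dxi = lam * xi - B * (total - xi ** 2) * xi - C * total * xi
    xi += dt * dxi
print("final order parameters:", np.round(xi, 3))   # only the winning pattern survives
```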

  • Factorization of String Polynomials

    Kazuyoshi MORI  Saburou IIDA  

     
    PAPER

    Vol: E80-A No:4  Page(s): 670-681

    A factorization method for string polynomials, called the constant method, is proposed. It essentially uses three operations: classification of monomials, gcrd (greatest common right divisor), and lcrm (least common right multiple). The method can be applied to all string polynomials except those whose constants cannot be reduced to zero by a linear transformation of variables. To factorize such excluded string polynomials, a naive method is also presented, which simply computes the coefficients of two factors of a given polynomial but is not efficient.

  • Image Associative Memory by Recurrent Neural Subnetworks

    Władysław SKARBEK  Andrzej CICHOCKI  

     
    PAPER-Neural Nets and Human Being

    Vol: E79-A No:10  Page(s): 1638-1646

    Gray-scale images are represented by recurrent neural subnetworks which, together with a competition layer, create an associative memory. The single recurrent subnetwork Ni implements a stochastic nonlinear fractal operator Fi constructed for the given image fi. We show that under realistic assumptions Fi has a unique attractor located in the vicinity of the original image, so one subnetwork represents one original image. The associative recall proceeds in two stages. First, the competition layer finds the subnetwork most invariant on the given noisy input image g. Next, the selected recurrent subnetwork produces, in a few (5-10) global iterations, a high-quality approximation of the original image. The degree of invariance of the subnetwork Ni on the input g is measured by the norm ||g-Fi(g)||. We have experimentally verified that associative recall for images of natural scenes with pixel values in [0, 255] is successful even when Gaussian noise has a standard deviation σ as large as 500. Moreover, the norm computed on only 10% of the pixels, chosen randomly from the images, still successfully recalls a close approximation of the original image. Compared to the Amari-Hopfield associative memory, our solution has no spurious states, is less sensitive to noise, and its network complexity is significantly lower; however, a new subnetwork must be added for each newly stored image.
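    The fractal operators Fi themselves are constructed from the stored images and are not specified in the abstract; the sketch below only illustrates the two-stage recall (competition by the norm ||g-Fi(g)||, then a few global iterations), using a trivially contractive stand-in operator per stored image.

```python
import numpy as np

rng = np.random.default_rng(5)
images = [rng.uniform(0, 255, size=(32, 32)) for _ in range(3)]   # stand-in stored images

def make_operator(f, alpha=0.5):
    """Stand-in for the per-image operator F_i: a plain contraction toward image f.
    (The paper builds F_i as a stochastic nonlinear fractal operator instead.)"""
    return lambda g: alpha * g + (1.0 - alpha) * f

operators = [make_operator(f) for f in images]

def recall(g, n_iterations=8):
    # Stage 1: the competition layer picks the subnetwork most invariant on g.
    scores = [np.linalg.norm(g - F(g)) for F in operators]
    F = operators[int(np.argmin(scores))]
    # Stage 2: a few global iterations of the selected recurrent subnetwork.
    for _ in range(n_iterations):
        g = F(g)
    return g

noisy = images[1] + rng.normal(0, 100, size=(32, 32))             # heavily corrupted input
restored = recall(noisy)
print("relative error:", np.linalg.norm(restored - images[1]) / np.linalg.norm(images[1]))
```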

  • Cellular Neural Networks with Multiple-Valued Output and Its Application

    Akihiro KANAGAWA  Hiroaki KAWABATA  Hiromitsu TAKAHASHI  

     
    LETTER

    Vol: E79-A No:10  Page(s): 1658-1663

    Various applications of cellular neural networks (CNN) have been reported, such as feature extraction from patterns, extraction of the edges or corners of a figure, noise exclusion, and maze searching. In this paper, we propose a cellular neural network in which each cell has more than two output levels. By using an output function with several saturated levels, each cell can take several output states. The multiple-valued CNN enhances the associative memory function so that it can express various kinds of aspects. We report an application of the enhanced associative memory function to the diagnosis of liver disorders.
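    One common way to obtain several saturated output levels is to sum shifted copies of the standard piecewise-linear CNN output function f(x) = 0.5(|x+1| - |x-1|); the sketch below uses that construction as an assumed illustration, since the paper's exact output function and cell templates are not given in the abstract.

```python
import numpy as np

def multilevel_output(x, centers=(-3.0, 0.0, 3.0)):
    """Piecewise-linear output with several saturated levels, built as a sum of
    shifted standard CNN output functions f(x) = 0.5*(|x+1| - |x-1|)."""
    x = np.asarray(x, dtype=float)
    return sum(0.5 * (np.abs(x - c + 1) - np.abs(x - c - 1)) for c in centers)

# With centers spaced wider than the transition width, the cell output saturates
# at four levels (-3, -1, 1, 3), with linear transitions in between.
samples = np.array([-5.0, -1.5, 0.0, 1.5, 5.0])
print(dict(zip(samples.tolist(), multilevel_output(samples).tolist())))
```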

  • A Local Property of the Phasor Model of Neural Networks

    Masahiro AGU  Kazuo YAMANAKA  Hiroki TAKAHASHI  

     
    LETTER-Bio-Cybernetics and Neurocomputing

    Vol: E79-D No:8  Page(s): 1209-1211

    Stable "phase-locked states" are found among the equilibria of the phasor model, a generalized Hopfield model whose local states are complex-valued and lie on the unit circle centred at the origin. The asynchronous updating rule is assumed, and the energy-decreasing property is used to investigate the equilibrium states. Some of the equilibria are shown to be "fragile" in the sense that the energy is not locally convex there. It is also shown that local convexity of the energy is assured by a kind of consistency between the equilibrium and the connection weights.
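    A hedged sketch of one common formulation is shown below: phasor (unit-circle, complex-valued) neurons with Hermitian Hebbian couplings, updated asynchronously by snapping each neuron to the allowed phase closest to its local field, which does not increase the usual generalized-Hopfield energy. The number of phase levels and the coupling rule are assumptions, not necessarily the letter's exact model.

```python
import numpy as np

rng = np.random.default_rng(6)
N, K = 40, 8                                    # neurons, allowed phase levels per neuron
phases = np.exp(2j * np.pi * np.arange(K) / K)  # local states on the unit circle

# One stored phasor pattern and Hebbian-like Hermitian couplings.
xi = phases[rng.integers(0, K, size=N)]
W = np.outer(xi, np.conj(xi)) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    """Generalized-Hopfield energy; asynchronous best-phase updates never increase it."""
    return -0.5 * np.real(np.conj(s) @ W @ s)

s = phases[rng.integers(0, K, size=N)]          # random initial state
for sweep in range(10):
    for i in rng.permutation(N):                # asynchronous updating rule
        h = W[i] @ s                            # local field of neuron i
        s[i] = phases[np.argmax(np.real(np.conj(phases) * h))]
print("energy:", energy(s), " overlap with stored pattern:", abs(np.conj(xi) @ s) / N)
```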

  • A 5-mW, 10-ns Cycle TLB Using a High-Performance CAM with Low-Power Match-Detection Circuits

    Hisayuki HIGUCHI  Suguru TACHIBANA  Masataka MINAMI  Takahiro NAGANO  

     
    PAPER-Static RAMs

    Vol: E79-C No:6  Page(s): 757-762

    Low-power, high-speed match-detection circuits for a content addressable memory (CAM) are proposed and evaluated. The circuits consist of a current supply to a match line, a differential amplifier, and 9-MOSFET CAM cells. With these circuits, a 16-entry, 32-bit data-compare CAM TEG achieved a 1.2-ns match-detection time with 5-mW power dissipation at a 10-ns cycle time.

  • On-Line Fault Diagnosis by Using Fuzzy Cognitive Map

    Keesang LEE  Sungho KIM  Masatoshi SAKAWA  

     
    PAPER-Reliability and Fault Analysis

    Vol: E79-A No:6  Page(s): 921-927

    A system that applies a Fuzzy Cognitive Map (FCM) to on-line fault diagnosis is presented. The diagnostic part of the system is composed of two diagnostic schemes. The first (the basic diagnostic algorithm) can be regarded as a straightforward transfer of Shiozaki's signed directed graph approach to the FCM framework. The second is an extended version of the basic diagnostic algorithm that adopts an important concept, the temporal associative memories (TAM) recall of FCM. In on-line application, a self-generated fault FCM model produces a predicted pattern sequence through the TAM recall process, which is compared with the observed pattern sequence to identify the origin of the fault. As the resulting diagnosis scheme requires little computation time, it can be used for on-line fault diagnosis of large and complex processes, and even for incipient fault diagnosis. In practical cases, since the observed pattern sequence may differ from the sequence predicted by TAM recall owing to propagation delays between process variables, a time-indexed fault FCM model incorporating delay times is proposed. The utility of the proposed system is illustrated through fault diagnosis of a tank-pipe system.
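    A minimal sketch of the TAM-recall idea is given below: a concept-state vector is iterated through the FCM weight matrix, with a bivalent threshold, to generate the predicted fault-propagation pattern sequence that on-line diagnosis compares against observations. The tiny weight matrix, the clamping of the fault node, and the threshold are made-up illustrations, not the paper's tank-pipe model.

```python
import numpy as np

# Hypothetical FCM: entry W[i, j] is the causal weight from concept i to concept j.
# Concepts: 0 = leak (fault), 1 = tank level low, 2 = outflow low, 3 = alarm
W = np.array([
    [0.0, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.8, 0.7],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

def threshold(x):
    """Bivalent threshold commonly used in FCM/TAM recall."""
    return (x > 0).astype(float)

def tam_recall(fault_state, steps=3):
    """Generate the predicted pattern sequence for a hypothesised fault."""
    sequence = [fault_state]
    state = fault_state
    for _ in range(steps):
        state = np.maximum(threshold(state @ W), fault_state)   # keep the fault clamped on
        sequence.append(state)
    return sequence

for t, pattern in enumerate(tam_recall(np.array([1.0, 0.0, 0.0, 0.0]))):
    print(f"t = {t}:", pattern)
# On-line diagnosis compares such predicted sequences with the observed pattern sequence.
```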

  • Hopfield Neural Network Learning Using Direct Gradient Descent of Energy Function

    Zheng TANG  Koichi TASHIMA  Hirofumi HEBISHIMA  Okihiko ISHIZUKA  Koichi TANNO  

     
    LETTER-Neural Networks

    Vol: E79-A No:2  Page(s): 258-261

    A learning algorithm for Hopfield neural networks that performs gradient descent directly on the energy function is proposed. The gradient descent is carried out not on the usual error functions but on the Hopfield energy function itself. We demonstrate the algorithm by testing it on an analog-to-digital conversion problem and an associative memory problem.
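    The abstract does not give the update equations; as one plausible reading (an assumption, not the authors' algorithm), the sketch below descends the standard Hopfield energy E(v) = -v'Wv/2 directly with respect to continuous states, clipping them to [-1, 1], and uses it for associative recall.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 64
xi = rng.choice([-1.0, 1.0], size=(3, N))      # three stored patterns
W = xi.T @ xi / N                              # Hebbian weights
np.fill_diagonal(W, 0.0)

def energy(v):
    return -0.5 * v @ W @ v

# Direct (projected) gradient descent of the energy with respect to the states:
# dE/dv = -W v, so the step is v <- clip(v + eta * W v, -1, 1).
v = xi[0] * np.where(rng.random(N) < 0.2, -1.0, 1.0)   # corrupt ~20% of pattern 0
eta = 0.5
for step in range(100):
    v = np.clip(v + eta * (W @ v), -1.0, 1.0)
print("energy:", energy(v), " overlap with pattern 0:", xi[0] @ np.sign(v) / N)
```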

  • Capacity of Semi-Orthogonally Associative Memory Neural Network Model

    Xin-Min HUANG  Yasumitsu MIYAZAKI  

     
    PAPER-Bio-Cybernetics and Neurocomputing

    Vol: E79-D No:1  Page(s): 72-81

    The Semi-Orthogonally Associative Memory neural network model (SAM) uses the orthogonal vectors in Un = {-1, 1}^n as its characteristic patterns. The characteristic parameter n must be selected optimally to make the model efficient. This paper investigates the dynamic behavior and the error-correcting capability of SAM by statistical neurodynamics and demonstrates that a convergence criterion exists for its recall process. Using these results, the optimum characteristic parameter is then derived. It is proved that, in the statistical sense, the recall outputs converge to the desired pattern when the initial similarity probability is larger than the convergence criterion, and do not converge otherwise. For a SAM with N neurons whose characteristic parameter is optimum, the memory capacity is N/(2 ln ln N), the information storage capacity per connection weight is larger than 9/23 (bits/weight), and the radius of the basin of attraction of a non-spurious stable state is about 0.25N. Computer simulations of this model are consistent with the theoretical analyses.
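    A quick numerical reading of the stated capacity formula (assuming natural logarithms, which the abstract does not specify):

```python
import math

for N in (100, 1000, 10000):
    capacity = N / (2 * math.log(math.log(N)))      # N / (2 ln ln N)
    print(f"N = {N:5d}: capacity ~ {capacity:6.0f} patterns "
          f"({capacity / N:.3f} patterns per neuron)")
```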

  • Authentication Codes Based on Association Schemes

    Youjin SONG  Kaoru KUROSAWA  Shigeo TSUJI  

     
    PAPER-Information Security

    Vol: E79-A No:1  Page(s): 126-130

    This paper discusses the problem of reducing the number of keys required to authenticate a source state (plaintext) and introduces a new way of constructing authentication codes. The construction uses association schemes, which are well-defined schemes in combinatorial design theory. An association scheme on the message (ciphertext) space is established by defining two relations between any two messages: the first relation holds when the two messages do not share a common key, and the second when they do. Using association schemes of the triangular and group-divisible types, we are able to reduce the number of keys.

  • A Synergetic Neural Network

    Masahiro NAKAGAWA  

     
    PAPER-Neural Networks

    Vol: E78-A No:3  Page(s): 412-423

    In this study we put forward a synergetic neural network and investigate its association dynamics. The present neuron model is based on a top-down formulation of the dynamic rule of an analog neural network, in contrast to the conventional framework. It is proved that complete association can be assured for up to as many embedded patterns as there are neurons. In practice, the association process is demonstrated on real images with 256 gray-scale levels and a size of 256×256 pixels. In addition, a search over the embedded patterns is also realised by controlling the attraction parameters, and a stochastic model of the dynamic process is proposed as an intermediate model between association and search over the embedded patterns. Finally, the stochastic property of the present model is characterized by the fractal dimension of the excitation level of a neuron.

  • An Auto-Correlation Associative Memory which Has an Energy Function of Higher Order

    Sadayuki MURASHIMA  Takayasu FUCHIDA  Toshihiro IDA  Takayuki TOYOHIRA  Hiromi MIYAJIMA  

     
    PAPER-Neural Networks

    Vol: E78-A No:3  Page(s): 424-430

    A noise-tolerant auto-correlation associative memory is proposed. The associated energy function is formed by multiplying several Hopfield energy functions, each of which has a single pattern as its energy minimum. An asynchronous optimization algorithm for the whole energy function, based on the binary neuron model, is also presented. The advantages of this new associative memory are that the patterns need not satisfy an orthogonality relation and that each stored pattern has a large basin of attraction around itself. Computer simulations show fairly good associative memory performance for arbitrary pattern vectors that are not orthogonal to each other.
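    A hedged sketch of the construction is given below: each stored pattern gets a non-negative single-pattern energy that vanishes exactly at that pattern, the total energy is their product, and an asynchronous sweep accepts a spin flip only if the product decreases. The particular per-pattern energy, its normalization, and the corruption level are illustrative assumptions, since the abstract does not give the exact form.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 100
patterns = rng.choice([-1, 1], size=(4, N))        # stored patterns, not orthogonal

def pattern_energy(s, xi):
    """Non-negative single-pattern energy, zero exactly at s = +/-xi
    (an illustrative choice; the paper's exact form is not given)."""
    return 1.0 - (xi @ s / len(s)) ** 2

def total_energy(s):
    return np.prod([pattern_energy(s, xi) for xi in patterns])

# Asynchronous optimization: flip one neuron at a time, keep the flip only if the
# product energy decreases.
s = patterns[0] * rng.choice([1, -1], size=N, p=[0.85, 0.15])   # corrupt ~15% of the bits
for sweep in range(5):
    for i in rng.permutation(N):
        e_before = total_energy(s)
        s[i] = -s[i]
        if total_energy(s) >= e_before:
            s[i] = -s[i]                           # reject the flip
print("overlap with pattern 0:", patterns[0] @ s / N)
```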

  • A Social Psychological Approach to "Networked" Reality

    Ken'ichi IKEDA  

     
    PAPER

    Vol: E77-D No:12  Page(s): 1390-1396

    In real life, our sense of social reality is supported by an institutional basis, a group/interpersonal basis, and a belief/schema basis. In networked life, in contrast, these natural and ordinary bases are not always warranted because of a lack of institutional backup, the fragility of the group or interpersonal environment, and the non-commonality of our common sense. In order to compensate for these incomplete bases, networkers are seeking adaptive communication styles, and three types of communication cultures emerge in this process. The first is the "name-card exchange" type, realized by verbally communicating our demographic attributes, which is useful for reality construction on the institutional basis. The second is the "ideographization" type, in which the content of customary nonverbal communication is creatively transformed into various pseudo-nonverbal or para-linguistic expressions that strengthen fragile interpersonal relationships. The last is the "verbalian" type, which never depends on the interpersonal or institutional basis; the networked reality is constructed solely through the attempt to develop common sense among members. By analyzing the content of messages exchanged in four public groups called "Forums," the author found that patterns of communication are transformed in a manner adaptive to each Forum's reality. Their adaptation modes differ and depend on the type of communication culture each Forum pursues. This is contrary to psychologists' tendency to assume that there must be common characteristics or rules valid throughout all electronic communication situations.
