
Keyword Search Result

[Keyword] synapse (8 hits)

Results 1-8 of 8
  • Quadruped Locomotion Patterns Generated by Desymmetrization of Symmetric Central Pattern Generator Hardware Network

    Naruki SASAGAWA  Kentaro TANI  Takashi IMAMURA  Yoshinobu MAEDA  

     
    PAPER-Nonlinear Problems

    Vol: E101-A No:10   Page(s): 1658-1667

    Reproducing quadruped locomotion from an engineering viewpoint is important not only for controlling robot locomotion but also for clarifying the nonlinear mechanism for switching between locomotion patterns. In this paper, we reproduced a quadruped locomotion pattern, the gallop, using a central pattern generator (CPG) hardware network based on the abelian group $\mathbf{Z}_4 \times \mathbf{Z}_2$, originally proposed by Golubitsky et al. We have already used the network to generate three locomotion patterns, walk, trot, and bound, by controlling the voltage $E_{\mathrm{MLR}}$ input to all CPGs, which acts as a signal from the midbrain locomotor region (MLR). In order to generate the gallop and canter patterns, we first analyzed the network symmetry using group theory. Based on the results of this analysis, we desymmetrized the contralateral couplings of the CPG network using a new parameter in addition to $E_{\mathrm{MLR}}$, because, whereas the walk, trot, and bound patterns could be generated from the spatio-temporal symmetry of the product group $\mathbf{Z}_4 \times \mathbf{Z}_2$, the gallop and canter patterns could not. As a result, using a constant element $\hat{\kappa}$ on $\mathbf{Z}_2$, the gallop and canter locomotion patterns were generated by the network on $\mathbf{Z}_4 + \hat{\kappa}\mathbf{Z}_4$, and the gallop locomotion pattern was actually generated on the hardware circuit in this paper.
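
    As an informal illustration of the desymmetrization idea, the following coupled-phase-oscillator sketch couples two rings of four oscillators (a Z4 x Z2 structure) and scales the contralateral couplings by a parameter kappa. The phase model and all constants are assumptions for illustration only, not the paper's analog hardware CPG equations.

        # Minimal coupled-phase-oscillator sketch of a Z4 x Z2 CPG network;
        # all constants are illustrative, not the paper's circuit values.
        import numpy as np

        N, K_IPSI, K_CONTRA, OMEGA, DT = 4, 0.5, 0.3, 2 * np.pi, 1e-3

        def step(phase, kappa):
            # kappa = 1 keeps the contralateral couplings symmetric;
            # kappa != 1 desymmetrizes them (cf. the paper's constant element).
            L, R = phase[:N], phase[N:]
            dL, dR = np.empty(N), np.empty(N)
            for i in range(N):
                nxt, prv = (i + 1) % N, (i - 1) % N
                dL[i] = (OMEGA
                         + K_IPSI * (np.sin(L[nxt] - L[i]) + np.sin(L[prv] - L[i]))
                         + kappa * K_CONTRA * np.sin(R[i] - L[i]))
                dR[i] = (OMEGA
                         + K_IPSI * (np.sin(R[nxt] - R[i]) + np.sin(R[prv] - R[i]))
                         + kappa * K_CONTRA * np.sin(L[i] - R[i]))
            return (phase + DT * np.concatenate([dL, dR])) % (2 * np.pi)

        phase = np.random.default_rng(1).uniform(0, 2 * np.pi, 2 * N)
        for _ in range(50_000):
            phase = step(phase, kappa=0.4)   # desymmetrized contralateral coupling
        # relative phases (fractions of a cycle) of the eight oscillators
        print(np.round(((phase - phase[0]) % (2 * np.pi)) / (2 * np.pi), 2))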

  • Three Gait Oscillations Switchable by a Single Parameter on Hard-Wired Central Pattern Generator Hardware Network

    Akihiro MARUYAMA  Kentaro TANI  Shigehito TANAHASHI  Atsuhiko IIJIMA  Yoshinobu MAEDA  

     
    PAPER-Neural Networks and Bioengineering

    Vol: E99-A No:8   Page(s): 1600-1608

    We present a hard-wired central pattern generator (CPG) hardware network that reproduces the periodic oscillations of the typical gaits, namely, walk, trot, and bound. Notably, the three gaits are generated by a single parameter, i.e., the battery voltage $E_{\mathrm{MLR}}$, which acts like a signal from the midbrain locomotor region. One CPG is composed of two types of hardware neuron models, reproducing neuronal bursting and beating (action potentials), and three types of hardware synapse models: a gap junction and excitatory and inhibitory synapses. When four hardware CPG models were coupled into a $\mathbf{Z}_4$ symmetry network in a previous study [22], two neuronal oscillation patterns corresponding to four-legged animal gaits (walk and bound) were generated by manipulating a single control parameter. However, no more than two neuronal oscillation patterns had been stably observed on a hard-wired four-CPG hardware network. In the current study, we show that three neuronal oscillation patterns (walk, trot, and bound) can be generated by manipulating a single control parameter on a hard-wired eight-CPG ($\mathbf{Z}_4 \times \mathbf{Z}_2$ symmetry) hardware network.
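
    For reference, the idealized relative leg phasings that distinguish the three gaits are listed below (standard values under one common convention from the quadruped-gait literature; the paper characterizes each gait by a symmetry type of the network rather than by these numbers directly).

        # Idealized relative phases (fraction of a cycle) for each leg:
        # LF/RF = left/right foreleg, LH/RH = left/right hindleg.
        GAITS = {
            "walk":  {"LF": 0.25, "RF": 0.75, "LH": 0.0, "RH": 0.5},  # four-beat
            "trot":  {"LF": 0.5,  "RF": 0.0,  "LH": 0.0, "RH": 0.5},  # diagonal pairs
            "bound": {"LF": 0.0,  "RF": 0.0,  "LH": 0.5, "RH": 0.5},  # fore/hind pairs
        }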

  • Search for Minimal and Semi-Minimal Rule Sets in Incremental Learning of Context-Free and Definite Clause Grammars

    Keita IMADA  Katsuhiko NAKAMURA  

     
    PAPER-Artificial Intelligence, Data Mining

    Vol: E93-D No:5   Page(s): 1197-1204

    This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called "bridging," based on bottom-up parsing of positive samples, together with a search for rule sets. The sizes of the rule sets and the computation time depend on the search strategy. In addition to global search, which synthesizes minimal rule sets, and serial search, another method that synthesizes semi-optimal rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.
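
    The search-strategy trade-off can be sketched generically: beam search keeps only the best few candidate rule sets at each step, trading the minimality guarantee of global search for speed. The skeleton below illustrates that idea only; expand, consistent, and the size-based cost are placeholders, not Synapse's actual bridging or parsing procedures.

        # Generic beam-search skeleton over candidate rule sets.
        def beam_search(initial_rules, expand, consistent, beam_width=5, max_iter=100):
            """Keep the beam_width smallest candidate rule sets at each step.
            expand(rules) yields rule sets extended by one candidate rule;
            consistent(rules) checks all positive and negative samples."""
            beam = [initial_rules]
            for _ in range(max_iter):
                candidates = [r for rules in beam for r in expand(rules)]
                solutions = [r for r in candidates if consistent(r)]
                if solutions:
                    return min(solutions, key=len)   # a semi-minimal rule set
                beam = sorted(candidates, key=len)[:beam_width]
            return None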

  • Analog CMOS Implementation of Quantized Interconnection Neural Networks for Memorizing Limit Cycles

    Cheol-Young PARK  Koji NAKAJIMA  

     
    PAPER

    Vol: E82-A No:6   Page(s): 952-957

    In order to investigate the dynamic behavior of quantized interconnection neural networks on neuro-chips, we have designed and fabricated hardware neural networks according to the design rules of a 1.2 µm CMOS technology. To this end, we have developed programmable synaptic weights for the interconnections, with the three values ±1 and 0. We have tested the chip and verified the dynamic behavior of the networks at the circuit level. As a result of our study, we can provide the most straightforward application of such networks as a dynamic pattern classifier. The proposed network is advantageous in that it does not need extra exemplars to classify shifted or reversed patterns.
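
    A software sketch of the network class under study is given below, assuming synchronous sign-threshold updates and ternary weights; the paper's implementation is an analog CMOS circuit, and its update dynamics may differ.

        # Quantized-interconnection network (weights in {-1, 0, +1}) settling
        # into a limit cycle; cycle detection via revisited states.
        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.integers(-1, 2, size=(8, 8))   # ternary synaptic weights
        x = rng.choice([-1, 1], size=8)        # bipolar initial state

        seen = {}
        for t in range(200):
            key = tuple(x)
            if key in seen:                    # state revisited -> limit cycle
                print(f"limit cycle of period {t - seen[key]}")
                break
            seen[key] = t
            x = np.where(W @ x >= 0, 1, -1)    # synchronous sign update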

  • Improving the Hopfield Model for TSP Feasible Solutions by Synapse Dynamical Systems

    Yoshikane TAKAHASHI  

     
    PAPER-Neural Networks

    Vol: E79-A No:5   Page(s): 694-708

    It is well known that the Hopfield Model (HM) for neural networks to solve the TSP suffers from three major drawbacks: (D1) it can converge to non-optimal local minimum solutions; (D2) it can also converge to non-feasible solutions; and (D3) the results are very sensitive to the careful tuning of its parameters. A number of methods have been proposed that overcome (D1) well. In contrast, work on (D2) and (D3) has not been sufficient; techniques have not been generalized to larger classes of optimization problems with constraints, including the TSP. We first construct Extended HMs (E-HMs) that overcome both (D2) and (D3). The extension of the E-HM lies in the addition of a synapse dynamical system that cooperates with the current HM unit dynamical system. It is this synapse dynamical system that makes the TSP constraint hold at any final state, for any choice of the HM parameters and initial state. We then generalize the E-HM further into a network that can solve a larger class of continuous optimization problems with a constraint equation, where both the objective function and the constraint function are non-negative and continuously differentiable.
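
    Schematically, the problem class that the generalized E-HM addresses can be written as follows (notation assumed here for exposition; the paper's unit and synapse dynamical systems are not reproduced):

        \min_{x}\; f(x) \quad \text{subject to} \quad g(x) = 0,
        \qquad f,\, g \ge 0, \quad f,\, g \in C^{1}.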

  • LSI Neural Chip of Pulse-Output Network with Programmable Synapse

    Shigeo SATO  Manabu YUMINE  Takayuki YAMA  Junichi MUROTA  Koji NAKAJIMA  Yasuji SAWADA  

     
    PAPER-Integrated Electronics

    Vol: E78-C No:1   Page(s): 94-100

    We have fabricated a microchip of a neural circuit with pulse representation. The neuron output is a voltage pulse train. The synapse is a constant current source whose output is proportional to the duty ratio of the neuron output. The membrane potential is charged by the collection of synaptic currents through an RC circuit, providing an analog operation similar to that of the biological neural system. We use a 4-bit SRAM as the memory for the synaptic weights. The expected I/O characteristics of the neurons and the synapses were measured experimentally. We have also demonstrated the capability of network operation with the use of synaptic weights by solving the A/D conversion problem.
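
    A behavioral sketch of the pulse scheme described above, assuming a duty-ratio-weighted current source per synapse and a leaky RC membrane integrator; the component values and the unit current are illustrative, not the chip's.

        # Duty-ratio synapse currents integrated on an RC membrane.
        R, C, DT = 1e6, 1e-9, 1e-6            # 1 Mohm, 1 nF, 1 us step (assumed)

        def membrane_step(v, duty_ratios, weights, i_unit=1e-6):
            """weights: signed synaptic weights (stored on-chip in a 4-bit SRAM);
            duty_ratios: presynaptic pulse-train duty ratios in [0, 1]."""
            i_syn = i_unit * sum(w * d for w, d in zip(weights, duty_ratios))
            return v + DT * (i_syn - v / R) / C   # RC charging of the membrane

        v = 0.0
        for _ in range(10_000):                   # ~10 time constants
            v = membrane_step(v, duty_ratios=[0.8, 0.2], weights=[3, -5])
        print(f"membrane potential ~ {v:.3f} V")  # approaches i_syn * R = 1.4 V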

  • The Advantages of a DRAM-Based Digital Architecture for Low-Power, Large-Scale Neuro-Chips

    Takao WATANABE  Masakazu AOKI  Katsutaka KIMURA  Takeshi SAKATA  Kiyoo ITOH  

     
    PAPER-Neural Networks and Chips

    Vol: E76-C No:7   Page(s): 1206-1214

    The advantages of a neuro-chip architecture based on a DRAM are demonstrated through a discussion of the general issues regarding a memory-based neuro-chip architecture and a comparison with a chip based on an SRAM. The performance of both chips is compared assuming digital operation, a 1.5-V supply voltage, a 10^6-synapse neural network capability, and a 0.5-µm CMOS design rule. The use of a one-transistor DRAM cell array for the storage of synapse weights results in a chip 55% smaller than an SRAM-based chip with the same 8-Mbit memory capacity and the same number of processing elements. No additional operations for refreshing the DRAM cell array are necessary during the processing of the neural networks. This is because all the synapse weights in the array are transferred to the processing elements during processing, and the DRAM cells in the array are automatically refreshed when they are selected. The precharge operation of the DRAM cell array degrades the processing speed; however, a processing speed of 1.37 GCPS is expected for the DRAM-based chip. That speed is comparable to the 1.71 GCPS of the SRAM-based chip with the same 256 parallel processing elements. A DRAM cell array has the additional advantage of lower power dissipation in this specific neuro-chip usage. The dynamic operation of the DRAM cell array results in 10% lower operating power dissipation than a chip using an SRAM cell array at the same processing speed of 1.37 GCPS. That lower operating power dissipation enables a DRAM-based chip to run longer on a 1.5-V dry cell under intermittent daily use, even though the SRAM cell array has little power dissipation in data-holding mode.
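
    A back-of-envelope check of the throughput figures quoted above (the per-element rate is an inference from the stated numbers, not a figure reported in the paper):

        # Connections-per-second figures from the abstract.
        ELEMENTS = 256                      # parallel processing elements, both chips
        dram_gcps, sram_gcps = 1.37, 1.71   # giga connections per second
        print(f"per-element rate: DRAM {dram_gcps / ELEMENTS * 1e3:.2f} MCPS, "
              f"SRAM {sram_gcps / ELEMENTS * 1e3:.2f} MCPS")
        print(f"precharge speed penalty: {1 - dram_gcps / sram_gcps:.0%}")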

  • Analog VLSI Implementation of Adaptive Algorithms by an Extended Hebbian Synapse Circuit

    Takashi MORIE  Osamu FUJITA  Yoshihito AMEMIYA  

     
    PAPER

    Vol: E75-C No:3   Page(s): 303-311

    First, a number of issues pertaining to the analog VLSI implementation of Backpropagation (BP) and Deterministic Boltzmann Machine (DBM) learning algorithms are clarified. According to results from software simulation, a mismatch between the activation function and the derivative generated by an independent circuit degrades BP learning performance. The performance can be improved, however, by adjusting the gain of the activation function used to obtain the derivative, irrespective of the original activation function. Calculation errors embedded in the circuits also degrade the learning performance. BP learning is sensitive to offset errors in the multiplications in the learning process, and DBM learning is sensitive to asymmetry between the weight increment and decrement processes. Next, an analog VLSI architecture for implementing the algorithms using common building-block circuits is proposed. The evaluation results of test chips confirm that synaptic weights can be updated at up to 1 MHz and that a resolution exceeding 14 bits can be attained. The test chips successfully perform XOR learning using each algorithm.
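
    The gain-mismatch issue can be sketched in software: if the derivative circuit realizes the derivative of a sigmoid with a different gain from the forward activation, BP trains with an inexact gradient. The snippet below is an illustrative model of that effect, not the chip's circuitry.

        # Forward activation and "derivative" generated by separate circuits
        # with possibly different gains.
        import numpy as np

        def sigmoid(x, gain=1.0):
            return 1.0 / (1.0 + np.exp(-gain * x))

        def circuit_derivative(x, deriv_gain):
            """Derivative circuit with its own gain; it matches the forward
            activation's true derivative only when the gains are equal."""
            y = sigmoid(x, deriv_gain)
            return deriv_gain * y * (1.0 - y)

        x = np.linspace(-4, 4, 9)
        exact = circuit_derivative(x, deriv_gain=1.0)   # matched to forward gain 1.0
        off   = circuit_derivative(x, deriv_gain=2.0)   # mismatched circuit
        print(np.max(np.abs(off - exact)))              # error BP must tolerate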