
Keyword Search Result

[Keyword] SoC (334 hits)

Results 321-334 of 334

  • A Flexible Search Managing Circuitry for High-Density Dynamic CAMs

    Takeshi HAMAMOTO  Tadato YAMAGATA  Masaaki MIHARA  Yasumitsu MURAI  Toshifumi KOBAYASHI  Hideyuki OZAKI  

     
    PAPER-General Technology

      Vol:
    E77-C No:8
      Page(s):
    1377-1384

    New circuit techniques are proposed to realize a high-density, high-performance content addressable memory (CAM). A dynamic register functioning as a status flag and several logic circuits are organically combined to perform complex search operations flexibly within a compact layout area. Any logic operation on the search results, namely AND, OR, INVERT, and combinations of them, can be carried out in every word simultaneously. These circuits have been implemented in an experimental 288-kbit dynamic CAM using 0.8 µm CMOS process technology. We consider these techniques indispensable for high-density, high-performance dynamic CAMs.
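
    As a rough software illustration of the flag-managed search described above (and not a model of the proposed circuitry itself), the Python sketch below keeps one status flag per stored word and folds successive parallel search results into it with AND, OR, or an inverted match. The class name, word width, masks, and example data are illustrative assumptions.

```python
# Software model (not the proposed circuit) of a CAM whose per-word status
# flags accumulate successive search results with AND / OR / inverted match.
import numpy as np

class FlagCAM:
    def __init__(self, words, width=32):
        self.words = np.asarray(words, dtype=np.uint64)
        self.mask_all = np.uint64((1 << width) - 1)
        self.flags = np.ones(len(self.words), dtype=bool)   # one status flag per word

    def _match(self, key, mask):
        # every stored word is compared with the (masked) key in parallel
        m = np.uint64(mask)
        return (self.words & m) == (np.uint64(key) & m)

    def search(self, key, mask=None, op="AND"):
        """Fold a new parallel search result into the per-word status flags."""
        hits = self._match(key, self.mask_all if mask is None else mask)
        if op == "AND":
            self.flags &= hits
        elif op == "OR":
            self.flags |= hits
        elif op == "INVERT":            # combine with the inverted search result
            self.flags &= ~hits
        else:
            raise ValueError(op)
        return np.flatnonzero(self.flags)   # indices of currently matching words

# Example: words whose low byte equals 0x3C but whose high byte is NOT 0x01
cam = FlagCAM([0x013C, 0x023C, 0x0200, 0x013D])
cam.search(0x3C, mask=0x00FF, op="AND")
print(cam.search(0x0100, mask=0xFF00, op="INVERT"))     # -> [1]
```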

  • Performance Formulation and Evaluation of Associative Memory Extended to Higher Order

    Yukio KUMAGAI  Joarder KAMRUZZAMAN  Hiromitsu HIKITA  

     
    LETTER-Neural Networks

      Vol:
    E77-A No:4
      Page(s):
    736-741

    In this letter, we present an alternative crosstalk formulation for associative memory based on the outer product algorithm extended to higher order, and a performance evaluation in terms of the probability of exact data recall derived from this formulation. The significant feature of these formulations is that both the crosstalk and the recall probability are expressed explicitly as functions of the Hamming distance between the memorized keys and the applied input key and of the degree of higher-order correlation. Simulation results using randomly generated data and keys show that the exact data retrieval ability of the associative memory agrees well with our theoretical estimation.
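
    As a concrete illustration of outer-product storage extended to higher order, the sketch below implements only the second-order case: key-data pairs are stored in a third-order weight tensor and recalled from a key at a small Hamming distance from a stored one. The sizes and data are illustrative assumptions; the letter's crosstalk and recall-probability formulas are not reproduced here.

```python
# Second-order outer-product associative memory (an illustrative special
# case of the higher-order extension discussed in the letter).
import numpy as np

rng = np.random.default_rng(0)
n, m, P = 32, 16, 5                       # key length, data length, stored pairs
keys = rng.choice([-1, 1], size=(P, n))
data = rng.choice([-1, 1], size=(P, m))

# W[a, i, j] = sum_p data[p, a] * keys[p, i] * keys[p, j]
W = np.einsum('pa,pi,pj->aij', data, keys, keys)

def recall(key):
    """Second-order recall: y_a = sgn( sum_{i,j} W[a, i, j] * key_i * key_j )."""
    return np.sign(np.einsum('aij,i,j->a', W, key, key))

# Apply a key at Hamming distance 3 from a stored key and check exact recall
noisy = keys[0].copy()
noisy[:3] *= -1
print(np.array_equal(recall(noisy), data[0]))
```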

  • Quick Learning for Bidirectional Associative Memory

    Motonobu HATTORI  Masafumi HAGIWARA  Masao NAKAGAWA  

     
    PAPER-Learning

      Vol:
    E77-D No:4
      Page(s):
    385-392

    Recently, much research on associative memories has been carried out and many neural network models have been proposed. Bidirectional Associative Memory (BAM) is one of them. The BAM uses Hebbian learning. However, unless the training vectors are orthogonal, Hebbian learning does not guarantee the recall of all training pairs; that is, a BAM trained by Hebbian learning suffers from low memory capacity. To improve the storage capacity of the BAM, the Pseudo-Relaxation Learning Algorithm for BAM (PRLAB) has been proposed. However, PRLAB needs many learning epochs because it starts from random initial weights. In this paper, we propose Quick Learning for BAM, which greatly reduces the number of learning epochs and guarantees the recall of all training pairs. In the proposed algorithm, the BAM is first trained by Hebbian learning and then trained by PRLAB. Owing to the Hebbian first stage, the weights start much closer to the solution space than randomly chosen initial weights, so the proposed algorithm needs far fewer learning epochs. The features of the proposed algorithm are: 1) it requires far fewer learning epochs; 2) it guarantees the recall of all training pairs; 3) it is robust to noisy inputs; 4) its memory capacity is much larger than that of the conventional BAM. In addition, we clarify several important characteristics of the conventional and proposed algorithms, such as their noise-reduction behavior and storage capacity, and identify an index related to noise reduction.
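
    The two-stage idea can be sketched as follows: Hebbian initialization of the weights, followed by a pseudo-relaxation-style correction loop that enforces a stability margin for every training pair. This is a simplified stand-in for PRLAB, not the algorithm as specified in the paper: only the forward (x to y) direction is enforced, and the margin and relaxation rate are illustrative parameters.

```python
# Minimal sketch of Quick-Learning-style training: Hebbian initialization,
# then a pseudo-relaxation-style refinement (simplified stand-in for PRLAB).
import numpy as np

rng = np.random.default_rng(1)
n, m, P = 32, 24, 8
X = rng.choice([-1, 1], size=(P, n))      # training pairs (x_p, y_p)
Y = rng.choice([-1, 1], size=(P, m))

W = (Y.T @ X).astype(float)               # stage 1: Hebbian initialization

def refine(W, X, Y, xi=1.0, lam=1.5, max_epochs=200):
    """Stage 2: enforce y_i * (W x)_i >= xi for every pair, relaxation-style.
    For brevity only the forward direction is handled; a full BAM would
    impose the analogous constraints for the reverse direction via W.T."""
    for epoch in range(max_epochs):
        changed = False
        for x, y in zip(X, Y):
            margins = y * (W @ x)
            bad = margins < xi             # violated stability constraints
            if bad.any():
                changed = True
                # move each violating row just past its constraint
                W[bad] += (lam * (xi - margins[bad]) / (x @ x))[:, None] * (y[bad, None] * x)
        if not changed:
            return W, epoch
    return W, max_epochs

W, epochs = refine(W, X, Y)
recalled = np.sign(W @ X.T).T              # forward recall of every training pair
print(epochs, np.array_equal(recalled, Y))
```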

  • Iterative Middle Mapping Learning Algorithm for Cellular Neural Networks

    Chen HE  Akio USHIDA  

     
    PAPER-Neural Networks

      Vol:
    E77-A No:4
      Page(s):
    706-715

    In this paper, a middle-mapping learning algorithm for cellular associative memories is presented. The algorithm makes full use of the properties of the cellular neural network, so the resulting associative memory has several advantages over a memory designed by the outer product method: it guarantees that each prototype is stored at an equilibrium point, and in a practical implementation the circuit is easy to build because the weight matrix representing the connections between cells is not symmetric. The synchronous updating rule makes its association speed very fast compared with the Hopfield associative memory.

  • Optical Associative Memory Using Optoelectronic Neurochips for Image Processing

    Masaya OITA  Yoshikazu NITTA  Shuichi TAI  Kazuo KYUMA  

     
    PAPER

      Vol:
    E77-C No:1
      Page(s):
    56-62

    This paper presents a novel model of optical associative memory using optoelectronic neurochips, which detect and process a two-dimensional input image at the same time. The original point of this model is that the optoelectronic neurochips allow direct image processing through a parallel input/output interface and parallel neural processing. The operating principle is based on the nonlinear transformation of the input image to the corresponding point attractor of a fully connected neural network. The learning algorithm is simulated annealing, with the energy of the network state used as the cost function. Computer simulations show the model's usefulness and that the maximum number of stored images is 150 in a network with 64 neurons. Moreover, we experimentally demonstrate an optical implementation of the model using the optoelectronic neurochip. The chip consists of a two-dimensional array of variable-sensitivity photodetectors with 8 × 16 elements. The experimental results show that 3 images of size 8 × 8 were successfully stored in the system. For an input image of size 64 × 64, the estimated processing speed is 100 times higher than that of conventional optoelectronic neurochips.

  • An Autocorrelation Associative Neural Network with Self-Feedbacks

    Hiroshi UEDA  Masaya OHTA  Akio OGIHARA  Kunio FUKUNAGA  

     
    LETTER

      Vol:
    E76-A No:12
      Page(s):
    2072-2075

    In this article, the autocorrelation associative neural network, one of the well-known applications of neural networks, is improved to extend its capacity and error-correcting ability. Our approach is based on the observation that negative self-feedbacks remove spurious states. We therefore propose a method that makes the self-feedbacks as small as possible within the range in which all stored patterns remain stable. A state transition rule that enables the network to escape oscillation is also presented, because the method may otherwise fall into oscillation. The effectiveness of the method is confirmed by computer simulations.
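
    A minimal numerical sketch of the idea, under the simplifying assumption of a single uniform self-feedback value d: store the patterns with the autocorrelation (Hebbian) rule and then choose d as negative as possible while every stored pattern remains a fixed point.

```python
# Choose the most negative uniform self-feedback d that keeps every stored
# pattern stable (a simplified, uniform-d version of the idea in the letter).
import numpy as np

rng = np.random.default_rng(2)
n, P = 64, 6
patterns = rng.choice([-1, 1], size=(P, n))

W = patterns.T @ patterns / n              # autocorrelation (Hebbian) weights
np.fill_diagonal(W, 0.0)                   # self-connections handled via d below

# Pattern xi is stable with self-feedback d iff xi_i * (W xi)_i + d > 0 for all i
margins = patterns * (patterns @ W)        # xi_i * (W xi)_i for every pattern/unit
eps = 1e-3
d = -margins.min() + eps                   # most negative admissible self-feedback
print("self-feedback d =", d)

def update(x, d):
    """One synchronous recall step with self-feedback d on the diagonal."""
    return np.sign(W @ x + d * x)

# Every stored pattern should be a fixed point under this d
print(all(np.array_equal(update(p, d), p) for p in patterns))
```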

  • The Trend of Functional Memory Development

    Keikichi TAMARU  

     
    INVITED PAPER

      Vol:
    E76-C No:11
      Page(s):
    1545-1554

    The concept of functional memory was proposed nearly four decades ago. However, practically usable products did not appear until the 1980s, despite the long history of development. Functional memory is classified into three categories: general functional memory, processing-element arrays with small local memory, and special-purpose memory. Today the majority of functional memories are associative memories, or content addressable memories (CAMs), and special-purpose memories based on CAM. Owing to advances in fabrication capability, the capacity of CAM LSIs has increased beyond 100 Kbits. General-purpose CAMs have been developed based on SRAM cells and DRAM cells, and typical CAM LSIs of both types, a 20-Kbit SRAM-based CAM and a 288-Kbit DRAM-based CAM, are introduced. DRAM-based CAM is attractive for large capacity. A parallel processor architecture based on CAM cells, called a Functional Memory Type Parallel Processor (FMPP), is also proposed. Its basic feature is a dual character: a higher-performance CAM and a tiny processor array. It can perform highly parallel operations on the stored data.

  • A Bitline Control Circuit Scheme and Redundancy Technique for High-Density Dynamic Content Addressable Memories

    Tadato YAMAGATA  Masaaki MIHARA  Takeshi HAMAMOTO  Yasumitsu MURAI  Toshifumi KOBAYASHI  Michihiro YAMADA  Hideyuki OZAKI  

     
    PAPER-Application Specific Memory

      Vol:
    E76-C No:11
      Page(s):
    1657-1664

    This paper describes a bitline control circuit and a redundancy technique for high-density dynamic content addressable memories (CAMs). The proposed bitline control circuit efficiently manages the operations required by a dynamic CAM cell, namely refresh, masked search, and partial write, in addition to the normal read/write/search operations. By adding a small supplementary circuit to the bitline control circuit, a redundancy scheme is obtained that prevents disabled column circuits from affecting the match operation. These circuit technologies achieve higher-density dynamic CAMs than conventional static CAMs, and have been successfully applied to a 288-kbit CAM with a typical cycle time of 150 ns.

  • Hybrid Neural Networks as a Tool for the Compressor Diagnosis

    Manabu KOTANI  Haruya MATSUMOTO  Toshihide KANAGAWA  

     
    PAPER-Speech Processing

      Vol:
    E76-D No:8
      Page(s):
    882-889

    An attempt to apply neural networks to acoustic diagnosis of a reciprocating compressor is described. The proposed network, the Hybrid Neural Network (HNN), is composed of two multi-layered neural networks: an Acoustic Feature Extraction Network (AFEN) and a Fault Discrimination Network (FDN). The AFEN is a multi-layer network whose middle hidden layer has fewer units than the other layers. Its input patterns are logarithmic power spectra; it is trained by error back-propagation with target patterns identical to the input patterns, so that after learning the hidden layer encodes a compressed representation of the input. The architecture of the AFEN appropriate for acoustic diagnosis is examined, including the form of the activation function in the output layer, the number of hidden layers, and the numbers of units in the hidden layers. The FDN is a three-layer network trained by the same algorithm, and the appropriate number of units in its hidden layer is examined. The input patterns of the FDN are the outputs of the hidden layer of the trained AFEN. The task of the HNN is to discriminate the types of faults in two compressor elements, the valve plate and the valve spring. The performance of the FDN is compared for different inputs: the output of the AFEN hidden layer, conventional cepstral coefficients, and filterbank outputs. Furthermore, the FDN itself is compared with a conventional pattern recognition technique based on feature-vector distance (the Euclidean distance measure) whose input is also taken from the AFEN. The results show that the discrimination accuracy of the HNN is better than that of the other combinations of discrimination method and input. An output criterion of the network for practical use is also discussed; with this criterion the discrimination accuracy is 85.4%, and no fault condition is mistaken for the normal condition. These results suggest that the proposed network is effective for acoustic diagnosis.
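
    A compact sketch of the AFEN-to-FDN pipeline on synthetic data is given below: a bottleneck autoencoder stands in for the AFEN and a small classifier for the FDN. The scikit-learn models, layer sizes, and synthetic "spectra" are assumptions made for illustration, not the configuration reported in the paper.

```python
# AFEN -> FDN sketch: compress log-power-spectrum-like vectors with a
# bottleneck autoencoder, then classify faults from the compressed features.
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier

rng = np.random.default_rng(3)
n_samples, n_bins, n_classes = 300, 64, 3     # e.g. normal / valve plate / valve spring
labels = rng.integers(0, n_classes, n_samples)
# crude synthetic "log power spectra": one spectral bump per class plus noise
centers = np.array([16, 32, 48])
bins = np.arange(n_bins)
spectra = (np.exp(-0.5 * ((bins - centers[labels, None]) / 6.0) ** 2)
           + 0.1 * rng.standard_normal((n_samples, n_bins)))

# AFEN stand-in: autoencoder with a small (bottleneck) hidden layer
afen = MLPRegressor(hidden_layer_sizes=(8,), activation='tanh',
                    max_iter=2000, random_state=0)
afen.fit(spectra, spectra)                    # target patterns equal the inputs

# Compressed features = bottleneck activations of the trained AFEN
features = np.tanh(spectra @ afen.coefs_[0] + afen.intercepts_[0])

# FDN stand-in: small classifier trained on the compressed features
fdn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
fdn.fit(features[:200], labels[:200])
print("held-out accuracy:", fdn.score(features[200:], labels[200:]))
```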

  • Abrupt Variations of Attractors Caused by Argumental Discreteness in Non-Hermitian Associative Memories

    Akira HIROSE  

     
    LETTER-Neural Nets--Theory and Applications--

      Vol:
    E76-A No:5
      Page(s):
    777-779

    Abrupt variations of attractors caused by argumental discreteness in non-Hermitian complex-valued neural networks are reported. When complex-valued associative memories are applied to dynamical processing, the weighting matrices are in general constructed as non-Hermitian so that they exert a driving force on the signal vectors. It is observed that competition between this argumental (phase) rotation force and the noise-suppression ability of the associative memory leads to trajectory distortions and abrupt variations of the attractors.
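
    The following toy construction (my own, not the letter's model) hints at the mechanism: a complex-valued Hebbian memory is made non-Hermitian by a global phase factor, which drives the state phases forward, while the component phases are quantized to K discrete levels (the argumental discreteness). Comparing a drive below and above half the quantization step pi/K shows how sharply the behaviour can change.

```python
# Toy complex-valued memory: non-Hermitian (rotating) weights plus K-level
# phase quantization; compare a small and a large rotational drive.
import numpy as np

rng = np.random.default_rng(4)
n, K, P = 32, 8, 3                            # neurons, phase levels, stored patterns
levels = np.exp(2j * np.pi * np.arange(K) / K)

def quantize(z):
    """Snap each component to the nearest of the K unit-circle phase states."""
    idx = np.argmin(np.abs(z[:, None] - levels[None, :]), axis=1)
    return levels[idx]

patterns = levels[rng.integers(0, K, size=(P, n))]
W = patterns.T @ patterns.conj() / n          # Hermitian auto-associative part

def run(theta, steps=12):
    Wrot = np.exp(1j * theta) * W             # global phase factor makes W non-Hermitian
    z = patterns[0].copy()
    for _ in range(steps):
        z = quantize(Wrot @ z)
    return z

for theta in (0.8 * np.pi / K, 1.2 * np.pi / K):
    moved = np.mean(run(theta) != patterns[0])
    print(f"theta = {theta:.3f}: fraction of components moved after 12 steps = {moved:.2f}")
```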

  • Associative Neural Network Models Based on a Measure of Manhattan Length

    Hiroshi UEDA  Yoichiro ANZAI  Masaya OHTA  Shojiro YONEDA  Akio OGIHARA  

     
    PAPER

      Vol:
    E76-A No:3
      Page(s):
    277-283

    In this paper, two models of associative memory based on a measure of Manhattan length are proposed. First, we propose a two-layered model, which has the advantage of easy implementation using a PDN. We also describe a way to improve the recall ability of this model for noisy input patterns. Second, we propose another model that, by means of lateral inhibition, always recalls the memory pattern nearest in Manhattan length. Even if the noise in an input pattern is so large that the first model fails to recall, this model still recalls the correct pattern. The performance of the two models is confirmed by computer simulations.
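
    The selection rule underlying both models can be written down directly: recall the stored pattern nearest to the input in Manhattan (L1) length. The sketch below shows only this criterion, not the two-layered or lateral-inhibition networks that realize it in the paper.

```python
# Nearest stored pattern under the Manhattan (L1) length measure.
import numpy as np

rng = np.random.default_rng(5)
P, n = 10, 64
memory = rng.choice([-1, 1], size=(P, n)).astype(float)

def recall_l1(x):
    """Return the stored pattern with the smallest L1 distance to x."""
    d = np.abs(memory - x).sum(axis=1)
    return memory[np.argmin(d)]

# Corrupt a stored pattern and check that it is still recalled
noisy = memory[3].copy()
noisy[rng.choice(n, size=12, replace=False)] *= -1    # flip 12 of 64 components
print(np.array_equal(recall_l1(noisy), memory[3]))
```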

  • The Capacity of Sparsely Encoded Associative Memories

    Mehdi N. SHIRAZI  

     
    PAPER-Bio-Cybernetics

      Vol:
    E76-D No:3
      Page(s):
    360-367

    We consider an asymptotically sparsely encoded associative memory. Patterns are encoded by n-dimensional vectors of +1 and -1 generated randomly by a sequence of biased Bernoulli trials and stored in the network according to the Hebbian rule. Using a heuristic argument we derive the following capacities: c(n) ≈ n^e / (4k log n) and C(n) ≈ n^e / (4k(1+e) log n), where 0 < e < 1 controls the degree of sparsity of the encoding scheme and k is a constant. Here c(n) is the capacity of the network such that any stored pattern is a fixed point with high probability, whereas C(n) is the capacity of the network such that all stored patterns are fixed points with high probability. The main contribution of this technical paper is a theoretical verification of the above results using the Poisson limit theorems of exchangeable events.
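
    An empirical sketch of the setting: biased-Bernoulli +1/-1 patterns stored with the Hebbian rule and checked against the two fixed-point criteria behind c(n) ("a given stored pattern is a fixed point") and C(n) ("all stored patterns are fixed points"). The bias value and pattern counts are illustrative choices, not the paper's asymptotic parameterization in e and k.

```python
# Biased-Bernoulli patterns, Hebbian storage, and empirical fixed-point checks.
import numpy as np

rng = np.random.default_rng(6)
n, bias = 200, 0.4                        # Pr(+1) = bias != 0.5 gives biased patterns

def is_fixed_point(W, xi):
    return np.array_equal(np.sign(W @ xi), xi)

for P in (5, 20, 80):
    patterns = np.where(rng.random((P, n)) < bias, 1.0, -1.0)
    W = patterns.T @ patterns / n         # Hebbian rule
    np.fill_diagonal(W, 0.0)
    fixed = [is_fixed_point(W, xi) for xi in patterns]
    print(f"P = {P:3d}: fraction of stored patterns fixed = {np.mean(fixed):.2f}, "
          f"all fixed: {all(fixed)}")
```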

  • Guaranteed Storing of Limit Cycles into a Discrete-Time Asynchronous Neural Network

    Kenji NOWARA  Toshimichi SAITO  

     
    PAPER-Neural Networks

      Vol:
    E75-A No:11
      Page(s):
    1579-1582

    This article discusses a synthesis procedure for a discrete-time asynchronous neural network whose stored information is a limit cycle. The synthesis procedure uses a novel connection matrix and can be reduced to a linear equation. If all elements of the desired limit cycles are independent at each transition step, the equation can be solved and all desired limit cycles can be stored. In some experiments, our procedure exhibits much better storing performance than previous ones.
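
    The "reduced to a linear equation" step can be sketched as follows: choose a connection matrix W that maps each state of the desired cycle to the next one by solving a linear system. For simplicity the sketch uses synchronous sign() dynamics; the paper's asynchronous updating and its particular connection matrix are not reproduced here.

```python
# Store one limit cycle by solving the linear system W X = X_next.
import numpy as np

rng = np.random.default_rng(7)
n, T = 20, 6                                   # neurons, cycle length (T <= n)
cycle = rng.choice([-1, 1], size=(T, n)).astype(float)

X = cycle.T                                    # columns: x_1 ... x_T
X_next = np.roll(cycle, -1, axis=0).T          # columns: x_2 ... x_T, x_1

# An exact solution exists when the cycle states are linearly independent
W = X_next @ np.linalg.pinv(X)

# Verify that the sign dynamics walks around the stored limit cycle
x = cycle[0].copy()
trajectory = [x]
for _ in range(T):
    x = np.sign(W @ x)
    trajectory.append(x)
print(all(np.array_equal(trajectory[t], cycle[t % T]) for t in range(T + 1)))
```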

  • Graph-Theoretical Construction of Uniquely Decodable Code Pair for the Two-User Binary Adder Channel

    Feng GUO  Yoichiro WATANABE  

     
    PAPER

      Vol:
    E75-A No:4
      Page(s):
    492-497

    It is known that uniquely decodable code pairs (C1, C2) for the two-user binary adder channel are related to the maximum independent sets of graphs associated with binary codes. This paper formulates the independence number of a class of graphs associated with binary linear codes and presents an algorithm for finding a maximum independent set of such graphs. Uniquely decodable code pairs (C1, C2) are produced, where C1 is a linear code and C2 is a maximum independent set of the graph associated with C1. For a given C1, the transmission rate of C2 is higher than that of Khachatrian's construction, which has been the best result so far. This is not surprising, because the code C2 in this paper is a maximum independent set, whereas Khachatrian's is not.
