
A Sparse Memory Access Architecture for Digital Neural Network LSIs

Kimihisa AIHARA, Osamu FUJITA, Kuniharu UCHIMURA


Summary

A sparse memory access architecture proposed to achieve a high-computational-speed neural-network LSI is described in detail. This architecture uses two key techniques, compressible synapse-weight neuron calculation and differential neuron operation, to reduce the number of accesses to synapse weight memories and the number of neuron calculations without incurring an accuracy penalty. A test chip based on this architecture has 96 parallel data-driven processing units and enough memory for 12,288 synapse weights. In a pattern recognition example, the number of memory accesses and neuron calculations was reduced to 0.87% of that needed in the conventional method, and the practical performance was 18 GCPS. The sparse memory access architecture is also effective when the synapse weights are stored in off-chip memory.
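The abstract gives no pseudocode, so the following is a minimal Python sketch of the differential-neuron-operation idea for a dense fully connected layer: weighted sums are updated incrementally, reading only the weight columns of inputs that changed since the previous step. All names here (`differential_layer_update`, `eps`, the array shapes) are hypothetical illustrations, not taken from the paper.

    import numpy as np

    def differential_layer_update(weights, prev_inputs, new_inputs, prev_sums, eps=0.0):
        """Update neuron weighted sums by touching only the synapse weights
        connected to inputs whose values changed (differential neuron operation).

        weights     : (n_neurons, n_inputs) synapse-weight matrix
        prev_inputs : (n_inputs,) input activations from the previous step
        new_inputs  : (n_inputs,) current input activations
        prev_sums   : (n_neurons,) weighted sums from the previous step
        eps         : change threshold below which an input counts as unchanged
        """
        delta = new_inputs - prev_inputs
        changed = np.abs(delta) > eps          # sparse set of changed inputs
        # Only the weight columns for changed inputs are read from memory;
        # all other memory accesses and multiply-accumulates are skipped.
        new_sums = prev_sums + weights[:, changed] @ delta[changed]
        return new_sums, int(changed.sum())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        w = rng.standard_normal((96, 128))     # 96 neurons, echoing the chip's PE count
        x0 = rng.standard_normal(128)
        x1 = x0.copy()
        x1[:3] += 1.0                          # only 3 of 128 inputs change
        s0 = w @ x0                            # one full calculation up front
        s1, n_touched = differential_layer_update(w, x0, x1, s0)
        assert np.allclose(s1, w @ x1)         # identical result, far fewer weight reads
        print(f"weight columns read: {n_touched} of {x1.size}")

With `eps = 0`, the incremental result is numerically identical to the full recomputation, which is consistent with the abstract's claim of reducing memory accesses without an accuracy penalty; in workloads where few inputs change between presentations, the fraction of weight memory read shrinks accordingly.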

Publication
IEICE TRANSACTIONS on Electronics Vol.E80-C No.7 pp.996-1002
Publication Date
1997/07/25
Type of Manuscript
Special Section PAPER (Special Issue on New Concept Device and Novel Architecture LSIs)
Category
Neural Networks and Chips
