This document describes a proposal for the implementation of a new VLSI neural network technique called Parallel Propagated Targets (PPT). This technique differs from existing techniques because all layers within a given network can learn simultaneously, rather than sequentially as with the Back Propagation algorithm. The Parallel Propagated Targets algorithm uses only information local to each layer, so there is no backward flow of information within the network. This allows a simplification in the system design and a reduction in the complexity of implementation, as well as achieving greater efficiency in terms of computation. Since all synapses can be calculated simultaneously, the PPT neural algorithm makes it possible, for the first time, to compute all layers of a multi-layered network in parallel.
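The abstract's central claim is one of locality: each layer's weight update depends only on quantities available at that layer, so there is no sequential backward dependency between layers as in back propagation. The abstract does not spell out how the per-layer targets are produced, so the Python sketch below is only a conceptual illustration of that dependency structure; the per-layer targets, layer sizes, and delta-rule update are assumptions made for the example, not details taken from the paper.

import numpy as np

# Conceptual sketch only. The point illustrated is the dependency structure:
# every layer's weight update uses only quantities local to that layer
# (its input and its own target), so all updates could in principle run in
# parallel, e.g. one hardware block per layer. How the per-layer targets are
# actually propagated in PPT is not described in the abstract; here they are
# simply supplied externally (hypothetical).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def local_layer_update(w, layer_input, layer_target, lr=0.1):
    """Delta-rule style update using only information local to one layer."""
    out = sigmoid(layer_input @ w)
    err = layer_target - out                      # local error, no back-propagated delta
    grad = layer_input.T @ (err * out * (1 - out))
    return w + lr * grad

rng = np.random.default_rng(0)
sizes = [8, 6, 4, 2]                              # hypothetical layer widths
weights = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(1, sizes[0]))
# Hypothetical per-layer targets; in the PPT scheme these would be made
# available to each layer directly rather than derived from a backward pass.
layer_targets = [rng.uniform(size=(1, n)) for n in sizes[1:]]

# Forward pass to obtain each layer's input.
layer_inputs = [x]
for w in weights:
    layer_inputs.append(sigmoid(layer_inputs[-1] @ w))

# Each update depends only on a local (input, target) pair, so this loop has
# no cross-layer data dependency and could be executed fully in parallel.
weights = [local_layer_update(w, inp, tgt)
           for w, inp, tgt in zip(weights, layer_inputs[:-1], layer_targets)]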
Anthony V. W. SMITH, Hiroshi SAKO, "A Hardware Implementation of a Neural Network Using the Parallel Propagated Targets Algorithm" in IEICE TRANSACTIONS on Information, vol. E77-D, no. 4, pp. 516-527, April 1994.
Abstract: This document describes a proposal for the implementation of a new VLSI neural network technique called Parallel Propagated Targets (PPT). This technique differs from existing techniques because all layers within a given network can learn simultaneously, rather than sequentially as with the Back Propagation algorithm. The Parallel Propagated Targets algorithm uses only information local to each layer, so there is no backward flow of information within the network. This allows a simplification in the system design and a reduction in the complexity of implementation, as well as achieving greater efficiency in terms of computation. Since all synapses can be calculated simultaneously, the PPT neural algorithm makes it possible, for the first time, to compute all layers of a multi-layered network in parallel.
URL: https://global.ieice.org/en_transactions/information/10.1587/e77-d_4_516/_p
@ARTICLE{e77-d_4_516,
author={Anthony V. W. SMITH and Hiroshi SAKO},
journal={IEICE TRANSACTIONS on Information},
title={A Hardware Implementation of a Neural Network Using the Parallel Propagated Targets Algorithm},
year={1994},
volume={E77-D},
number={4},
pages={516-527},
abstract={This document describes a proposal for the implementation of a new VLSI neural network technique called Parallel Propagated Targets (PPT). This technique differs from existing techniques because all layers within a given network can learn simultaneously, rather than sequentially as with the Back Propagation algorithm. The Parallel Propagated Targets algorithm uses only information local to each layer, so there is no backward flow of information within the network. This allows a simplification in the system design and a reduction in the complexity of implementation, as well as achieving greater efficiency in terms of computation. Since all synapses can be calculated simultaneously, the PPT neural algorithm makes it possible, for the first time, to compute all layers of a multi-layered network in parallel.},
keywords={},
doi={},
ISSN={},
month={April},}
TY - JOUR
TI - A Hardware Implementation of a Neural Network Using the Parallel Propagated Targets Algorithm
T2 - IEICE TRANSACTIONS on Information
SP - 516
EP - 527
AU - Anthony V. W. SMITH
AU - Hiroshi SAKO
PY - 1994
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E77-D
IS - 4
JA - IEICE TRANSACTIONS on Information
Y1 - April 1994
AB - This document describes a proposal for the implementation of a new VLSI neural network technique called Parallel Propagated Targets (PPT). This technique differs from existing techniques because all layers within a given network can learn simultaneously, rather than sequentially as with the Back Propagation algorithm. The Parallel Propagated Targets algorithm uses only information local to each layer, so there is no backward flow of information within the network. This allows a simplification in the system design and a reduction in the complexity of implementation, as well as achieving greater efficiency in terms of computation. Since all synapses can be calculated simultaneously, the PPT neural algorithm makes it possible, for the first time, to compute all layers of a multi-layered network in parallel.
ER -