High speed simulation of neural networks can be achieved through parallel implementations capable of exploiting their massive inherent parallelism. In this paper, we show how this inherent parallelism can be effectively exploited on parallel data-driven systems. By using these systems, the asynchronous parallelism of neural networks can be naturally specified by the functional data-driven programs, and maximally exploited by pipelined and scalable data-driven processors. We shall demonstrate the suitability of data-driven systems for the parallel simulation of neural networks through a parallel implementation of the widely used back propagation networks. The implementation is based on the exploitation of the network and training set parallelisms inherent in these networks, and is evaluated using an image data compression network.
Ali M. ALHAJ, Hiroaki TERADA, "Exploiting Parallelism in Neural Networks on a Dynamic Data-Driven System" in IEICE TRANSACTIONS on Fundamentals,
vol. E76-A, no. 10, pp. 1804-1811, October 1993.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/e76-a_10_1804/_p
@ARTICLE{e76-a_10_1804,
author={Ali M. ALHAJ and Hiroaki TERADA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Exploiting Parallelism in Neural Networks on a Dynamic Data-Driven System},
year={1993},
volume={E76-A},
number={10},
pages={1804-1811},
abstract={High speed simulation of neural networks can be achieved through parallel implementations capable of exploiting their massive inherent parallelism. In this paper, we show how this inherent parallelism can be effectively exploited on parallel data-driven systems. By using these systems, the asynchronous parallelism of neural networks can be naturally specified by the functional data-driven programs, and maximally exploited by pipelined and scalable data-driven processors. We shall demonstrate the suitability of data-driven systems for the parallel simulation of neural networks through a parallel implementation of the widely used back propagation networks. The implementation is based on the exploitation of the network and training set parallelisms inherent in these networks, and is evaluated using an image data compression network.},
month={October}
}
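The BibTeX record above can be consumed programmatically. As an illustrative sketch only (the naive regular expression below is an assumption, not a full BibTeX parser, and it does not handle nested braces or `@string` macros), the simple `key={value}` fields can be extracted with Python's standard `re` module:

```python
import re

# The IEICE BibTeX record, reproduced from the export above.
record = """@ARTICLE{e76-a_10_1804,
author={Ali M. ALHAJ and Hiroaki TERADA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Exploiting Parallelism in Neural Networks on a Dynamic Data-Driven System},
year={1993},
volume={E76-A},
number={10},
pages={1804-1811},
month={October}
}"""

# Pull out flat `key={value}` fields; this pattern assumes no
# nested braces inside a field value, which holds for this record.
fields = dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", record))

print(fields["title"])
print(fields["volume"], fields["number"], fields["pages"])
```

For production use, a dedicated BibTeX parsing library would be more robust than this regex sketch, but for a single flat record like this one the approach above is sufficient.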
TY - JOUR
TI - Exploiting Parallelism in Neural Networks on a Dynamic Data-Driven System
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1804
EP - 1811
AU - Ali M. ALHAJ
AU - Hiroaki TERADA
PY - 1993
JO - IEICE TRANSACTIONS on Fundamentals
VL - E76-A
IS - 10
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - October 1993
AB - High speed simulation of neural networks can be achieved through parallel implementations capable of exploiting their massive inherent parallelism. In this paper, we show how this inherent parallelism can be effectively exploited on parallel data-driven systems. By using these systems, the asynchronous parallelism of neural networks can be naturally specified by the functional data-driven programs, and maximally exploited by pipelined and scalable data-driven processors. We shall demonstrate the suitability of data-driven systems for the parallel simulation of neural networks through a parallel implementation of the widely used back propagation networks. The implementation is based on the exploitation of the network and training set parallelisms inherent in these networks, and is evaluated using an image data compression network.
ER -