Parallel memories increase memory bandwidth with several memory modules working in parallel and can be used to feed a processor with only the necessary data. The Configurable Parallel Memory Architecture (CPMA) enables a multitude of access formats and module assignment functions to be used within a single hardware implementation, which has not been possible in prior embedded parallel memory systems. This paper focuses on address computation in CPMA, which is implemented using several configurable computation units in parallel. One unit is dedicated to each type of access format and module assignment function that the implementation supports. Timing and area estimates are given for a 0.25-micron CMOS process. The utilized resources are shown to be linearly proportional to the number of memory modules.
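To make the terminology concrete, the sketch below illustrates the two functions the abstract refers to, using a simple low-order interleaving scheme. This is a generic textbook example, not the CPMA design from the paper: the module count `N`, the assignment function, and the row access format are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's implementation): a parallel memory
# with N modules and low-order interleaving.
# A linear address a maps to:
#   module assignment function  S(a) = a mod N   (which module holds the data)
#   address function            A(a) = a div N   (row inside that module)
N = 4  # assumed number of memory modules

def module_assignment(addr: int) -> int:
    """Module assignment function S: selects the memory module."""
    return addr % N

def row_address(addr: int) -> int:
    """Address function A: selects the row within the module."""
    return addr // N

def access_row(base: int):
    """A 'row' access format: fetch N consecutive elements starting at base.

    The access is conflict-free (all N modules read in parallel in one
    cycle) only if every element lands in a distinct module.
    """
    elems = [base + i for i in range(N)]
    modules = [module_assignment(a) for a in elems]
    conflict_free = len(set(modules)) == N
    return modules, conflict_free
```

With low-order interleaving, a row of N consecutive elements always hits all N distinct modules, so `access_row` is conflict-free; other access formats (columns, diagonals, blocks) generally require different assignment functions, which is the configurability problem CPMA addresses with one dedicated computation unit per supported format.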
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Eero AHO, Jarno VANNE, Kimmo KUUSILINNA, Timo D. HAMALAINEN, "Address Computation in Configurable Parallel Memory Architecture," IEICE TRANSACTIONS on Information, vol. E87-D, no. 7, pp. 1674-1681, July 2004.
Abstract: Parallel memories increase memory bandwidth with several memory modules working in parallel and can be used to feed a processor with only the necessary data. The Configurable Parallel Memory Architecture (CPMA) enables a multitude of access formats and module assignment functions to be used within a single hardware implementation, which has not been possible in prior embedded parallel memory systems. This paper focuses on address computation in CPMA, which is implemented using several configurable computation units in parallel. One unit is dedicated to each type of access format and module assignment function that the implementation supports. Timing and area estimates are given for a 0.25-micron CMOS process. The utilized resources are shown to be linearly proportional to the number of memory modules.
URL: https://global.ieice.org/en_transactions/information/10.1587/e87-d_7_1674/_p
@ARTICLE{e87-d_7_1674,
author={Eero AHO and Jarno VANNE and Kimmo KUUSILINNA and Timo D. HAMALAINEN},
journal={IEICE TRANSACTIONS on Information},
title={Address Computation in Configurable Parallel Memory Architecture},
year={2004},
volume={E87-D},
number={7},
pages={1674-1681},
abstract={Parallel memories increase memory bandwidth with several memory modules working in parallel and can be used to feed a processor with only the necessary data. The Configurable Parallel Memory Architecture (CPMA) enables a multitude of access formats and module assignment functions to be used within a single hardware implementation, which has not been possible in prior embedded parallel memory systems. This paper focuses on address computation in CPMA, which is implemented using several configurable computation units in parallel. One unit is dedicated to each type of access format and module assignment function that the implementation supports. Timing and area estimates are given for a 0.25-micron CMOS process. The utilized resources are shown to be linearly proportional to the number of memory modules.},
month={July},
}
TY - JOUR
TI - Address Computation in Configurable Parallel Memory Architecture
T2 - IEICE TRANSACTIONS on Information
SP - 1674
EP - 1681
AU - Eero AHO
AU - Jarno VANNE
AU - Kimmo KUUSILINNA
AU - Timo D. HAMALAINEN
PY - 2004
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E87-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2004
AB - Parallel memories increase memory bandwidth with several memory modules working in parallel and can be used to feed a processor with only the necessary data. The Configurable Parallel Memory Architecture (CPMA) enables a multitude of access formats and module assignment functions to be used within a single hardware implementation, which has not been possible in prior embedded parallel memory systems. This paper focuses on address computation in CPMA, which is implemented using several configurable computation units in parallel. One unit is dedicated to each type of access format and module assignment function that the implementation supports. Timing and area estimates are given for a 0.25-micron CMOS process. The utilized resources are shown to be linearly proportional to the number of memory modules.
ER -