In recent processor designs, memory access latency is reduced by adopting a memory hierarchy: a main memory built from dynamic random-access memory (DRAM) and a cache memory built from static random-access memory (SRAM). Cache memory, whose capacity continues to grow, accounts for a large proportion of the energy consumption of the overall processor. There are two ways to reduce the energy consumption of the cache: decreasing the number of accesses, and reducing the energy consumed per access. In this study, we shrink the stored size of data in the L1 cache by compressing frequent bit sequences, thereby cutting the energy consumed per access. A “frequent bit sequence” is a specific bit pattern that often appears in the high-order bits of data held in the cache. According to measurements with a software simulator, our proposed mechanism cuts energy consumption by 41.0% on average compared with conventional mechanisms.
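The abstract's idea of compressing high-order frequent bit sequences can be illustrated with a minimal sketch. The encoding below is an assumption for illustration only, not the paper's actual scheme: it treats sign-extension patterns (upper 16 bits all zeros or all ones) as the frequent bit sequences, storing a 2-bit tag plus the low 16 bits when the pattern matches, as in classic frequent-pattern compression.

```python
# Hypothetical sketch of frequent-bit-sequence compression for 32-bit
# cache words. Tags: 0 = upper half all zeros, 1 = upper half all ones
# (sign extension), 2 = no frequent pattern, word stored uncompressed.

def compress_word(word: int) -> tuple[int, int]:
    """Return (tag, payload) for a 32-bit word."""
    upper = (word >> 16) & 0xFFFF
    lower = word & 0xFFFF
    if upper == 0x0000:
        return (0, lower)   # small non-negative value: keep low 16 bits
    if upper == 0xFFFF:
        return (1, lower)   # sign-extended negative value
    return (2, word)        # pattern absent: keep the full word

def decompress_word(tag: int, payload: int) -> int:
    """Reconstruct the original 32-bit word from (tag, payload)."""
    if tag == 0:
        return payload
    if tag == 1:
        return 0xFFFF0000 | payload
    return payload
```

In such a scheme, a word that matches a frequent pattern occupies 16 bits plus a tag instead of 32 bits, so reads and writes of compressed lines touch fewer SRAM bits per access.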
Ryotaro KOBAYASHI
Toyohashi University of Technology
Ikumi KANEKO
Toyohashi University of Technology
Hajime SHIMADA
Nagoya University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Ryotaro KOBAYASHI, Ikumi KANEKO, Hajime SHIMADA, "Improvement of Data Utilization Efficiency for Cache Memory by Compressing Frequent Bit Sequences" in IEICE TRANSACTIONS on Electronics,
vol. E99-C, no. 8, pp. 936-946, August 2016, doi: 10.1587/transele.E99.C.936.
URL: https://global.ieice.org/en_transactions/electronics/10.1587/transele.E99.C.936/_p
@ARTICLE{e99-c_8_936,
author={Ryotaro KOBAYASHI and Ikumi KANEKO and Hajime SHIMADA},
journal={IEICE TRANSACTIONS on Electronics},
title={Improvement of Data Utilization Efficiency for Cache Memory by Compressing Frequent Bit Sequences},
year={2016},
volume={E99-C},
number={8},
pages={936-946},
keywords={},
doi={10.1587/transele.E99.C.936},
ISSN={1745-1353},
month={August}
}
TY - JOUR
TI - Improvement of Data Utilization Efficiency for Cache Memory by Compressing Frequent Bit Sequences
T2 - IEICE TRANSACTIONS on Electronics
SP - 936
EP - 946
AU - Ryotaro KOBAYASHI
AU - Ikumi KANEKO
AU - Hajime SHIMADA
PY - 2016
DO - 10.1587/transele.E99.C.936
JO - IEICE TRANSACTIONS on Electronics
SN - 1745-1353
VL - E99-C
IS - 8
JA - IEICE TRANSACTIONS on Electronics
Y1 - August 2016
ER -