Low power consumption is important in edge artificial intelligence (AI) chips, where the power supply is limited. We therefore propose the reconfigurable neural network accelerator (ReNA), an AI chip that can process both convolutional and fully connected layers with the same structure by reconfiguring its circuit. In addition, we developed tools for pre-evaluating performance when a deep neural network (DNN) model is implemented on ReNA. With this approach, we established a flow for implementing DNN models on ReNA and evaluated its power consumption. ReNA achieved 1.51 TOPS/W in the convolutional layers and 1.38 TOPS/W overall on a VGG16 model with a 70% pruning rate.
Yasuhiro NAKAHARA
Kumamoto University
Masato KIYAMA
Kumamoto University
Motoki AMAGASAKI
Kumamoto University
Qian ZHAO
Kyushu Institute of Technology
Masahiro IIDA
Kumamoto University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yasuhiro NAKAHARA, Masato KIYAMA, Motoki AMAGASAKI, Qian ZHAO, Masahiro IIDA, "Reconfigurable Neural Network Accelerator and Simulator for Model Implementation" in IEICE TRANSACTIONS on Fundamentals,
vol. E105-A, no. 3, pp. 448-458, March 2022, doi: 10.1587/transfun.2021VLP0012.
Abstract: Low power consumption is important in edge artificial intelligence (AI) chips, where power supply is limited. Therefore, we propose reconfigurable neural network accelerator (ReNA), an AI chip that can process both a convolutional layer and fully connected layer with the same structure by reconfiguring the circuit. In addition, we developed tools for pre-evaluation of the performance when a deep neural network (DNN) model is implemented on ReNA. With this approach, we established the flow for the implementation of DNN models on ReNA and evaluated its power consumption. ReNA achieved 1.51TOPS/W in the convolutional layer and 1.38TOPS/W overall in a VGG16 model with a 70% pruning rate.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2021VLP0012/_p
@ARTICLE{e105-a_3_448,
author={Yasuhiro NAKAHARA and Masato KIYAMA and Motoki AMAGASAKI and Qian ZHAO and Masahiro IIDA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Reconfigurable Neural Network Accelerator and Simulator for Model Implementation},
year={2022},
volume={E105-A},
number={3},
pages={448--458},
abstract={Low power consumption is important in edge artificial intelligence (AI) chips, where power supply is limited. Therefore, we propose reconfigurable neural network accelerator (ReNA), an AI chip that can process both a convolutional layer and fully connected layer with the same structure by reconfiguring the circuit. In addition, we developed tools for pre-evaluation of the performance when a deep neural network (DNN) model is implemented on ReNA. With this approach, we established the flow for the implementation of DNN models on ReNA and evaluated its power consumption. ReNA achieved 1.51TOPS/W in the convolutional layer and 1.38TOPS/W overall in a VGG16 model with a 70% pruning rate.},
keywords={},
doi={10.1587/transfun.2021VLP0012},
ISSN={1745-1337},
month={March},}
TY - JOUR
TI - Reconfigurable Neural Network Accelerator and Simulator for Model Implementation
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 448
EP - 458
AU - Yasuhiro NAKAHARA
AU - Masato KIYAMA
AU - Motoki AMAGASAKI
AU - Qian ZHAO
AU - Masahiro IIDA
PY - 2022
DO - 10.1587/transfun.2021VLP0012
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E105-A
IS - 3
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - March 2022
AB - Low power consumption is important in edge artificial intelligence (AI) chips, where power supply is limited. Therefore, we propose reconfigurable neural network accelerator (ReNA), an AI chip that can process both a convolutional layer and fully connected layer with the same structure by reconfiguring the circuit. In addition, we developed tools for pre-evaluation of the performance when a deep neural network (DNN) model is implemented on ReNA. With this approach, we established the flow for the implementation of DNN models on ReNA and evaluated its power consumption. ReNA achieved 1.51TOPS/W in the convolutional layer and 1.38TOPS/W overall in a VGG16 model with a 70% pruning rate.
ER -