We propose an efficient training method for memristor neural networks. The proposed method is suitable for mini-batch training, a technique commonly used to train neural networks. By integrating the two processes of gradient calculation in the backpropagation algorithm and weight update in the write operation to the memristors, the proposed method accelerates training and eliminates the external computing resources, such as multipliers and memories, that the existing method requires. Through numerical experiments, we demonstrate that the proposed method converges twice as fast as the existing method while maintaining the same level of classification accuracy.
Satoshi YAMAMORI
Kyoto University
Masayuki HIROMOTO
Kyoto University
Takashi SATO
Kyoto University
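The core idea summarized in the abstract, folding the backpropagation gradient calculation into the memristor write operation itself, can be illustrated with a short sketch. The Python code below is a minimal, idealized model and not the paper's circuit: the weight matrix W stands in for an array of memristor conductances, and the function names (conventional_update, fused_update) are hypothetical. It only shows where the gradient accumulation happens, in external memory for the existing method versus in the device writes themselves for the fused one.

import numpy as np

def conventional_update(W, xs, deltas, lr):
    """Existing scheme: accumulate the mini-batch gradient externally, then write once."""
    grad = np.zeros_like(W)            # external memory holds the accumulator
    for x, delta in zip(xs, deltas):
        grad += np.outer(delta, x)     # external multipliers form each rank-1 term
    return W - lr * grad / len(xs)     # single bulk write of the averaged gradient

def fused_update(W, xs, deltas, lr):
    """Fused scheme: each sample's rank-1 term is applied directly as a small write."""
    for x, delta in zip(xs, deltas):
        # No external accumulator: each contribution becomes an
        # incremental conductance change on the device.
        W = W - (lr / len(xs)) * np.outer(delta, x)
    return W

For a linear update rule the two functions produce identical weights; the difference is that the fused version needs no external accumulator or multiplier, which is the resource saving the abstract claims.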
Satoshi YAMAMORI, Masayuki HIROMOTO, Takashi SATO, "Efficient Mini-Batch Training on Memristor Neural Network Integrating Gradient Calculation and Weight Update" in IEICE TRANSACTIONS on Fundamentals, vol. E101-A, no. 7, pp. 1092-1100, July 2018, doi: 10.1587/transfun.E101.A.1092.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E101.A.1092/_p
@ARTICLE{e101-a_7_1092,
author={Satoshi YAMAMORI and Masayuki HIROMOTO and Takashi SATO},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Efficient Mini-Batch Training on Memristor Neural Network Integrating Gradient Calculation and Weight Update},
year={2018},
volume={E101-A},
number={7},
pages={1092-1100},
doi={10.1587/transfun.E101.A.1092},
ISSN={1745-1337},
month={July},
}
TY - JOUR
TI - Efficient Mini-Batch Training on Memristor Neural Network Integrating Gradient Calculation and Weight Update
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1092
EP - 1100
AU - Satoshi YAMAMORI
AU - Masayuki HIROMOTO
AU - Takashi SATO
PY - 2018
DO - 10.1587/transfun.E101.A.1092
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E101-A
IS - 7
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - 2018/07
ER -