With the continued innovation of deep neural networks, spiking neural networks (SNNs), which more closely resemble biological brain synapses, have attracted attention because of their low power consumption. Unlike artificial neural networks (ANNs), SNNs must employ an encoding process to convert continuous data values into spike trains, and this encoding suppresses the SNN's performance. To avoid this degradation, the incoming analog signal must be regulated prior to the encoding process, a strategy also found in living organisms; for example, the basilar membrane of the human ear mechanically performs a Fourier transform. To this end, we combine an ANN and an SNN to build ANN-to-SNN hybrid neural networks (HNNs) that improve classification performance. To evaluate this performance and its robustness, we run various classification tasks on the MNIST and CIFAR-10 image datasets while varying the training and encoding methods. In addition, we present simultaneous and separate training methods for the artificial and spiking layers, taking the encoding method of each into account. We find that increasing the number of artificial layers at the expense of spiking layers improves HNN performance. For straightforward datasets such as MNIST, performance similar to that of ANNs is achieved by using duplicate coding and separate learning. For more complex tasks, however, the use of Gaussian coding and simultaneous learning improves the accuracy of the HNN while lowering power consumption.
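The encoding step named in the abstract is the source of the performance loss the paper addresses. Purely as an illustrative aid (not taken from the paper), the sketch below shows two common ways a normalized analog value can be turned into spike trains: a rate-based scheme, comparable in spirit to the duplicate coding mentioned above, and a Gaussian population (receptive-field) scheme, comparable in spirit to the Gaussian coding mentioned above. It then feeds the encoded output of a single analog layer into a minimal leaky integrate-and-fire layer to suggest where the ANN and SNN parts of a hybrid network meet. All function names, layer sizes, and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(x, n_steps=20):
    """Rate-style coding (sketch): the normalized value x in [0, 1] is used
    as a firing probability at each of n_steps time steps."""
    return (rng.random((x.shape[0], n_steps)) < x[:, None]).astype(np.float32)

def gaussian_encode(x, n_neurons=8, sigma=0.1, n_steps=20):
    """Gaussian population coding (sketch): each value is projected onto
    n_neurons Gaussian receptive fields; a well-matched neuron fires early
    (time-to-first-spike), a poorly matched one fires late."""
    centres = np.linspace(0.0, 1.0, n_neurons)
    resp = np.exp(-0.5 * ((x[:, None] - centres) / sigma) ** 2)  # (D, n_neurons)
    t = np.round((1.0 - resp) * (n_steps - 1)).astype(int)       # spike times
    train = np.zeros((x.shape[0] * n_neurons, n_steps), dtype=np.float32)
    train[np.arange(train.shape[0]), t.ravel()] = 1.0            # one spike each
    return train

def lif_layer(spikes, weights, tau=0.9, v_th=1.0):
    """Minimal leaky integrate-and-fire layer: integrates weighted input
    spikes over time, emits a spike and resets whenever the membrane
    potential crosses v_th."""
    n_out, n_steps = weights.shape[0], spikes.shape[1]
    v = np.zeros(n_out)
    out = np.zeros((n_out, n_steps), dtype=np.float32)
    for t in range(n_steps):
        v = tau * v + weights @ spikes[:, t]
        fired = v >= v_th
        out[fired, t] = 1.0
        v[fired] = 0.0
    return out

# Toy "hybrid" forward pass on a flattened image: an artificial (analog)
# layer regulates the signal, its output is encoded into spikes, and a
# spiking layer consumes the spike trains.
x = rng.random(784).astype(np.float32)        # stand-in for a flattened MNIST image
print("rate-coded first pixel:", rate_encode(x[:1])[0])

W_ann = rng.normal(0, 0.05, size=(64, 784))   # hypothetical ANN layer weights
h = np.clip(W_ann @ x, 0.0, 1.0)              # ReLU-like activation kept in [0, 1]
spike_in = gaussian_encode(h)                 # encode the ANN output into spikes
W_snn = rng.normal(0, 0.05, size=(10, spike_in.shape[0]))
spike_out = lif_layer(spike_in, W_snn)
print("output spike counts per class:", spike_out.sum(axis=1))
```

In this toy layout, the analog layer both compresses the input and keeps its output in a fixed range before encoding, which is the kind of signal regulation the abstract argues should precede spike encoding.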
Naoya MURAMATSU
University of Cape Town
Hai-Tao YU
University of Tsukuba
Tetsuji SATOH
University of Tsukuba
Naoya MURAMATSU, Hai-Tao YU, Tetsuji SATOH, "Combining Spiking Neural Networks with Artificial Neural Networks for Enhanced Image Classification" in IEICE TRANSACTIONS on Information, vol. E106-D, no. 2, pp. 252-261, February 2023, doi: 10.1587/transinf.2021EDP7237.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7237/_p
@ARTICLE{e106-d_2_252,
author={Naoya MURAMATSU and Hai-Tao YU and Tetsuji SATOH},
journal={IEICE TRANSACTIONS on Information},
title={Combining Spiking Neural Networks with Artificial Neural Networks for Enhanced Image Classification},
year={2023},
volume={E106-D},
number={2},
pages={252-261},
keywords={},
doi={10.1587/transinf.2021EDP7237},
ISSN={1745-1361},
month={February},}
TY - JOUR
TI - Combining Spiking Neural Networks with Artificial Neural Networks for Enhanced Image Classification
T2 - IEICE TRANSACTIONS on Information
SP - 252
EP - 261
AU - Naoya MURAMATSU
AU - Hai-Tao YU
AU - Tetsuji SATOH
PY - 2023
DO - 10.1587/transinf.2021EDP7237
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E106-D
IS - 2
JA - IEICE TRANSACTIONS on Information
Y1 - February 2023
ER -