The growing computation and storage costs of convolutional neural networks (CNNs) severely hinder their deployment on resource-limited devices. There is therefore a pressing need to accelerate such networks. In this paper, we propose a loss-driven method to prune redundant channels of CNNs. It identifies unimportant channels via a Taylor-expansion technique applied to the scaling and shifting factors, and prunes those channels using a fixed percentile threshold. In this way, we obtain a compact network with fewer parameters and lower FLOPs consumption. In the experimental section, we evaluate the proposed method on the CIFAR datasets with several popular networks, including VGG-19, DenseNet-40 and ResNet-164; the results demonstrate that the proposed method can prune over 70% of channels and parameters with no performance loss. Moreover, iterative pruning can be applied to obtain an even more compact network.
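The criterion described in the abstract can be sketched as follows: score each channel by the first-order Taylor term of the loss with respect to its batch-norm scaling factor (gamma) and shifting factor (beta), then remove channels whose score falls below a fixed percentile. This is a minimal illustrative sketch, not the paper's implementation; the function names and the exact form of the score are assumptions based on the abstract's description.

```python
import numpy as np

def taylor_channel_importance(gamma, beta, grad_gamma, grad_beta):
    """Per-channel importance score (illustrative).

    First-order Taylor estimate of the loss change caused by zeroing
    a channel's scaling/shifting factors: |dL/dgamma * gamma + dL/dbeta * beta|.
    """
    return np.abs(grad_gamma * gamma + grad_beta * beta)

def prune_mask(importance, percentile=70.0):
    """Keep-mask under a fixed percentile threshold (illustrative).

    Channels whose importance is at or below the given percentile of all
    importance scores are marked for removal (mask entry False).
    """
    threshold = np.percentile(importance, percentile)
    return importance > threshold

# Toy example: 4 channels, prune the bottom half by importance.
gamma = np.array([1.0, 0.1, 0.8, 0.05])
beta = np.array([0.5, 0.0, 0.2, 0.0])
g_gamma = np.array([2.0, 0.1, 1.0, 0.02])
g_beta = np.array([1.0, 0.0, 0.5, 0.0])

scores = taylor_channel_importance(gamma, beta, g_gamma, g_beta)
keep = prune_mask(scores, percentile=50.0)
```

After pruning, the kept channels define a thinner layer that can be fine-tuned; repeating score-prune-finetune rounds corresponds to the iterative pruning mentioned above.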
Xin LONG
National University of Defense Technology
Xiangrong ZENG
National University of Defense Technology
Chen CHEN
National University of Defense Technology
Huaxin XIAO
National University of Defense Technology
Maojun ZHANG
National University of Defense Technology
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Xin LONG, Xiangrong ZENG, Chen CHEN, Huaxin XIAO, Maojun ZHANG, "Loss-Driven Channel Pruning of Convolutional Neural Networks" in IEICE TRANSACTIONS on Information,
vol. E103-D, no. 5, pp. 1190-1194, May 2020, doi: 10.1587/transinf.2019EDL8200.
Abstract: The growing computation and storage costs of convolutional neural networks (CNNs) severely hinder their deployment on resource-limited devices. There is therefore a pressing need to accelerate such networks. In this paper, we propose a loss-driven method to prune redundant channels of CNNs. It identifies unimportant channels via a Taylor-expansion technique applied to the scaling and shifting factors, and prunes those channels using a fixed percentile threshold. In this way, we obtain a compact network with fewer parameters and lower FLOPs consumption. In the experimental section, we evaluate the proposed method on the CIFAR datasets with several popular networks, including VGG-19, DenseNet-40 and ResNet-164; the results demonstrate that the proposed method can prune over 70% of channels and parameters with no performance loss. Moreover, iterative pruning can be applied to obtain an even more compact network.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2019EDL8200/_p
@ARTICLE{e103-d_5_1190,
author={Xin LONG and Xiangrong ZENG and Chen CHEN and Huaxin XIAO and Maojun ZHANG},
journal={IEICE TRANSACTIONS on Information},
title={Loss-Driven Channel Pruning of Convolutional Neural Networks},
year={2020},
volume={E103-D},
number={5},
pages={1190-1194},
abstract={The growing computation and storage costs of convolutional neural networks (CNNs) severely hinder their deployment on resource-limited devices. There is therefore a pressing need to accelerate such networks. In this paper, we propose a loss-driven method to prune redundant channels of CNNs. It identifies unimportant channels via a Taylor-expansion technique applied to the scaling and shifting factors, and prunes those channels using a fixed percentile threshold. In this way, we obtain a compact network with fewer parameters and lower FLOPs consumption. In the experimental section, we evaluate the proposed method on the CIFAR datasets with several popular networks, including VGG-19, DenseNet-40 and ResNet-164; the results demonstrate that the proposed method can prune over 70% of channels and parameters with no performance loss. Moreover, iterative pruning can be applied to obtain an even more compact network.},
keywords={},
doi={10.1587/transinf.2019EDL8200},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Loss-Driven Channel Pruning of Convolutional Neural Networks
T2 - IEICE TRANSACTIONS on Information
SP - 1190
EP - 1194
AU - Xin LONG
AU - Xiangrong ZENG
AU - Chen CHEN
AU - Huaxin XIAO
AU - Maojun ZHANG
PY - 2020
DO - 10.1587/transinf.2019EDL8200
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E103-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2020
AB - The growing computation and storage costs of convolutional neural networks (CNNs) severely hinder their deployment on resource-limited devices. There is therefore a pressing need to accelerate such networks. In this paper, we propose a loss-driven method to prune redundant channels of CNNs. It identifies unimportant channels via a Taylor-expansion technique applied to the scaling and shifting factors, and prunes those channels using a fixed percentile threshold. In this way, we obtain a compact network with fewer parameters and lower FLOPs consumption. In the experimental section, we evaluate the proposed method on the CIFAR datasets with several popular networks, including VGG-19, DenseNet-40 and ResNet-164; the results demonstrate that the proposed method can prune over 70% of channels and parameters with no performance loss. Moreover, iterative pruning can be applied to obtain an even more compact network.
ER -