
Author Search Result

[Author] Xin LONG (2 hits)

  • Channel Pruning via Improved Grey Wolf Optimizer Pruner Open Access

    Xueying WANG  Yuan HUANG  Xin LONG  Ziji MA  

     
    LETTER-Fundamentals of Information Systems

      Publicized: 2024/03/07
      Vol: E107-D No:7
      Page(s): 894-897

    In recent years, the increasing complexity of deep network structures has hindered their application on small, resource-constrained hardware, so there is an urgent need to compress and accelerate deep network models. Channel pruning is an effective method for compressing deep neural networks, but most existing channel pruning methods are prone to falling into local optima. In this paper, we propose a channel pruning method based on an Improved Grey Wolf Optimizer, called IGWO-Pruner, to prune redundant channels of convolutional neural networks. It identifies the pruning ratio of each layer using the Improved Grey Wolf algorithm and then fine-tunes the pruned network model. In the experimental section, we evaluate the proposed method on the CIFAR datasets and ILSVRC-2012 with several classical networks, including VGGNet, GoogLeNet and ResNet-18/34/56/152, and the experimental results demonstrate that the proposed method is able to prune a large number of redundant channels and parameters with little loss in performance.
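
    As a rough illustration of the search component described in this abstract, the sketch below runs a standard Grey Wolf Optimizer loop over a vector of per-layer pruning ratios. The specific improvements of the paper's IGWO variant are not reproduced here, and evaluate_pruned_network is a hypothetical stand-in for pruning a CNN at the candidate ratios and scoring the result before fine-tuning.

```python
# Minimal sketch: standard Grey Wolf Optimizer searching per-layer pruning ratios.
# The paper's "Improved" GWO details are not specified here; this is an assumption-
# laden illustration, not the authors' implementation.
import numpy as np

def evaluate_pruned_network(ratios):
    # Hypothetical fitness: in practice, prune each conv layer by ratios[i],
    # briefly fine-tune, and return a cost combining accuracy loss and FLOPs.
    return float(np.sum((ratios - 0.5) ** 2))  # dummy objective for illustration

def gwo_prune_ratios(num_layers, wolves=10, iters=50, lo=0.1, hi=0.9, seed=0):
    rng = np.random.default_rng(seed)
    pack = rng.uniform(lo, hi, size=(wolves, num_layers))   # candidate ratio vectors
    fitness = np.array([evaluate_pruned_network(w) for w in pack])

    for t in range(iters):
        # Alpha, beta, delta: the three best wolves guide the rest of the pack.
        order = np.argsort(fitness)
        alpha, beta, delta = pack[order[0]], pack[order[1]], pack[order[2]]
        a = 2.0 * (1.0 - t / iters)  # exploration coefficient decays over iterations

        for i in range(wolves):
            new_pos = np.zeros(num_layers)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(num_layers), rng.random(num_layers)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - pack[i])
                new_pos += leader - A * D
            pack[i] = np.clip(new_pos / 3.0, lo, hi)
            fitness[i] = evaluate_pruned_network(pack[i])

    return pack[np.argmin(fitness)]  # per-layer pruning ratios to apply before fine-tuning

if __name__ == "__main__":
    print(gwo_prune_ratios(num_layers=13))  # e.g., a VGG-like network with 13 conv layers
```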

  • Loss-Driven Channel Pruning of Convolutional Neural Networks

    Xin LONG  Xiangrong ZENG  Chen CHEN  Huaxin XIAO  Maojun ZHANG  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized: 2020/02/17
      Vol: E103-D No:5
      Page(s): 1190-1194

    In recent years, the growth in the computation cost and storage of convolutional neural networks (CNNs) has severely hindered their application on resource-limited devices. As a result, there is a pressing need to accelerate these networks. In this paper, we propose a loss-driven method to prune redundant channels of CNNs. It identifies unimportant channels using a Taylor expansion with respect to the scaling and shifting factors, and prunes those channels with a fixed percentile threshold. By doing so, we obtain a compact network with fewer parameters and lower FLOPs consumption. In the experimental section, we evaluate the proposed method on the CIFAR datasets with several popular networks, including VGG-19, DenseNet-40 and ResNet-164, and the experimental results demonstrate that the proposed method is able to prune over 70% of channels and parameters with no performance loss. Moreover, iterative pruning can be used to obtain an even more compact network.
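
    One plausible reading of this criterion is sketched below: a first-order Taylor estimate of the loss change when a BatchNorm channel's scaling (gamma) and shifting (beta) factors are zeroed, followed by a fixed global percentile threshold. The exact formulation in the paper may differ, and the function names here are illustrative, not the authors' code.

```python
# Hedged sketch of loss-driven channel scoring with a fixed percentile threshold.
# Assumed importance: |gamma * dL/dgamma| + |beta * dL/dbeta| per BatchNorm channel,
# accumulated over a data loader. This is an assumption, not the paper's exact rule.
import torch
import torch.nn as nn

def channel_importance(model, data_loader, criterion, device="cpu"):
    model.to(device).train()
    scores = {}  # maps each BatchNorm2d module name to a per-channel score tensor

    for inputs, targets in data_loader:
        model.zero_grad()
        loss = criterion(model(inputs.to(device)), targets.to(device))
        loss.backward()
        for name, m in model.named_modules():
            if isinstance(m, nn.BatchNorm2d):
                # First-order Taylor term for zeroing the scale and shift of each channel.
                s = (m.weight * m.weight.grad).abs() + (m.bias * m.bias.grad).abs()
                scores[name] = scores.get(name, 0) + s.detach()
    return scores

def select_pruned_channels(scores, percentile=70.0):
    # Global fixed-percentile threshold: channels whose accumulated importance
    # falls below the given percentile are marked for pruning.
    all_scores = torch.cat([s.flatten() for s in scores.values()])
    threshold = torch.quantile(all_scores, percentile / 100.0)
    return {name: (s < threshold).nonzero(as_tuple=True)[0] for name, s in scores.items()}
```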