Computer and Modernization ›› 2021, Vol. 0 ›› Issue (10): 107-111.


A Model Compression Algorithm for Convolutional Neural Networks


(School of Communications and Information Engineering, Xi’an University of Posts and Telecommunications, Xi’an 710121, China)
Online: 2021-10-14    Published: 2021-10-14

Abstract: Convolutional neural networks have achieved a series of breakthrough research results, and their superior performance relies on deep architectures. To address the large redundancy in the parameters and computation of complex convolutional neural networks, a concise and effective network model compression algorithm is proposed. First, the correlation between convolution kernels is judged by computing the Pearson correlation coefficient between them, and redundant parameters are iteratively deleted to compress the convolutional layers. Second, a local-global fine-tuning strategy is adopted to restore network performance. Finally, a parameter orthogonality regularization is proposed to promote orthogonality between convolution kernels and reduce redundant features. Experimental results show that, on the MNIST dataset, the compression ratio of the AlexNet convolutional-layer parameters reaches 53.2% and the number of floating-point operations is reduced by 42.8% without any loss of test accuracy. In addition, the model converges with a small error.
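The two core ingredients described in the abstract, Pearson-correlation screening of convolution kernels and an orthogonality regularizer on the kernel weights, can be illustrated with a minimal sketch. The sketch below assumes PyTorch; the threshold value, penalty weight, and function names are illustrative assumptions, not the settings or code used in the paper.

```python
import torch
import torch.nn as nn


def redundant_kernel_pairs(conv: nn.Conv2d, threshold: float = 0.9):
    """Return index pairs of convolution kernels whose flattened weights
    have an absolute Pearson correlation above `threshold` (pruning candidates)."""
    # Flatten each output kernel to a row vector: (out_channels, in_channels*kh*kw)
    w = conv.weight.detach().flatten(start_dim=1)
    # Pearson correlation = dot product of mean-centred, unit-norm rows
    w = w - w.mean(dim=1, keepdim=True)
    w = w / (w.norm(dim=1, keepdim=True) + 1e-8)
    corr = w @ w.t()                           # (out_channels, out_channels)
    pairs = []
    n = corr.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j].abs() > threshold:
                pairs.append((i, j))           # kernel j is redundant w.r.t. kernel i
    return pairs


def orthogonality_penalty(conv: nn.Conv2d) -> torch.Tensor:
    """Soft orthogonality regularizer ||W W^T - I||_F^2 over flattened kernels,
    pushing kernels toward mutual orthogonality and hence fewer redundant features."""
    w = conv.weight.flatten(start_dim=1)
    gram = w @ w.t()
    eye = torch.eye(gram.shape[0], device=w.device)
    return ((gram - eye) ** 2).sum()


if __name__ == "__main__":
    layer = nn.Conv2d(3, 16, kernel_size=3)
    print(redundant_kernel_pairs(layer, threshold=0.9))
    # During fine-tuning, the penalty would be added to the task loss
    # with a small weight (1e-4 here is an arbitrary illustrative value).
    loss_reg = 1e-4 * orthogonality_penalty(layer)
    print(loss_reg.item())
```

In such a scheme, highly correlated kernels are pruned layer by layer, the network is fine-tuned to recover accuracy, and the orthogonality term discourages the remaining kernels from re-learning near-duplicate features.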

Key words: convolutional neural network, convolution kernel, Pearson correlation coefficient, model compression, orthogonality