Computer and Modernization ›› 2017, Vol. 0 ›› Issue (2): 22-29+35.doi: 10.3969/j.issn.1006-2475.2017.02.005


Convolutional Sparse Autoencoder Neural Networks

  

  1. (College of Computer and Information, Hohai University, Nanjing 210098, China)
  Received: 2016-10-18; Online: 2017-03-09; Published: 2017-03-20

Abstract:

The convolutional neural network is a research hotspot in the field of image recognition. This paper proposes a convolutional sparse autoencoder neural network (CSAENN) that improves and simplifies existing convolutional autoencoders. First, the traditional deconvolution step is replaced with zero padding around the feature maps; compared with deconvolution, this approach reduces computational complexity while having little effect on feature extraction and reconstruction. Second, only the encoder weights are updated during training, and the decoder weights are set to the transpose of the encoder weights. This tying establishes a relationship between the encoder and decoder weights and realizes both feature extraction and sample reconstruction with the same weights, which can therefore be regarded as well pre-trained. Finally, to improve network performance, population sparsity, lifetime sparsity, and high dispersal constraints are applied to the encoder, making the weights and outputs sparser. Experimental comparisons on the MNIST and CIFAR10 datasets demonstrate that CSAENN achieves better performance.
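The abstract does not give the exact formulation, but the two core ideas — decoding by zero-padding the feature maps and convolving with the tied (transposed/flipped) encoder weights, and a combined population-sparsity/lifetime-sparsity/high-dispersal penalty in the style of sparse filtering (Ngiam et al.) — can be sketched as follows. All function names, the single-channel setting, the ReLU activation, and the sparse-filtering form of the penalty are illustrative assumptions, not the paper's verbatim method:

```python
import numpy as np

def conv2d_valid(x, w):
    """Plain 'valid' 2-D correlation of single-channel image x with kernel w."""
    H, W = x.shape
    k = w.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def encode(x, w):
    # Encoder: valid convolution followed by a nonlinearity (ReLU assumed here).
    return np.maximum(conv2d_valid(x, w), 0.0)

def decode(h, w):
    # Decoder with NO learned deconvolution: zero-pad the feature map by
    # (k - 1) on each side, then convolve with the flipped encoder kernel
    # (the tied-weight counterpart of the encoder). The output recovers the
    # input's spatial size.
    k = w.shape[0]
    padded = np.pad(h, k - 1)          # zero padding around the feature map
    return conv2d_valid(padded, w[::-1, ::-1])

def sparsity_penalty(F, eps=1e-8):
    """Sparse-filtering-style penalty on activations F (n_samples, n_features):
    per-feature normalization pushes toward high dispersal / lifetime sparsity,
    per-sample normalization plus the l1 sum pushes toward population sparsity."""
    Fn = F / (np.linalg.norm(F, axis=0, keepdims=True) + eps)
    Fn = Fn / (np.linalg.norm(Fn, axis=1, keepdims=True) + eps)
    return np.abs(Fn).sum()
```

With an 8x8 input and a 3x3 kernel, `encode` yields a 6x6 feature map and `decode` pads it to 10x10 before convolving back to 8x8, so reconstruction error can be measured directly against the input without any transposed-convolution machinery.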

Key words: convolutional neural network, sparse autoencoder, deconvolution, population sparsity, lifetime sparsity, high dispersal
