[1] Arel I, Rose D C, Karnowski T P. Deep machine learning - a new frontier in artificial intelligence research[J]. IEEE Computational Intelligence Magazine, 2010,5(4):13-18.
[2] Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006,18(7):1527-1554.
[3] Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors[J]. Nature,
1986,323:533-536.
[4] Bengio Y, Lamblin P, Popovici D, et al. Greedy layer-wise training of deep networks[C]. Proc. of the 20th Annual Conference on Neural Information Processing Systems. 2006:153-160.
[5] Vincent P, Larochelle H, Bengio Y. Extracting and composing robust features with denoising auto-encoders[C]. Proc. of the 25th International Conference on Machine Learning. 2008:1096-1103.
[6] Bengio Y. Learning deep architectures for AI[J]. Foundations and Trends in Machine Learning, 2009,2(1):1
-127.
[7] Rifai S, Vincent P, Muller X, et al. Contractive auto-encoders: Explicit invariance during feature extraction[C]. Proc. of the 28th International Conference on Machine Learning. 2011:833-840.
[8] Masci J, Meier U, Ciresan D, et al. Stacked convolutional auto-encoders for hierarchical feature extraction[C]. Proc. of the 21st International Conference on Artificial Neural Networks. 2011,6791:52-59.
[9] Guyon I, Dror G, Lemaire V, et al. Auto-encoders, unsupervised learning, and deep architectures[C]. Proc. of the ICML Workshop on Unsupervised and Transfer Learning. 2012:37-50.
[10]Vincent P, Larochelle H, Lajoie I, et al. Stacked denoising auto-encoders: Learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010,11:3371-3408.
[11]Mitchell B, Sheppard J. Deep structure learning: Beyond connectionist approaches[C]. Proc. of the 11th
International Conference on Machine Learning and Applications. 2012:162-167.
[12]Erhan D, Bengio Y, Courville A, et al. Why does unsupervised pre-training help deep learning?[J]. Journal of Machine Learning Research, 2010,11:625-660.
[13]Deng Li, Seltzer M L, Yu Dong, et al. Binary coding of speech spectrograms using a deep auto-encoder[C]. Proc. of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH). 2010:1692-1695.
[14]Lee H, Ekanadham C, Ng A Y. Sparse deep belief net model for visual area V2[C]. Proc. of the 21st International Conference on Neural Information Processing Systems. 2007:873-880.
[15]Amaral T, Silva L M, Alexandre L A, et al. Using different cost functions to train stacked auto-encoders[C]. Proc. of the 12th Mexican International Conference on Artificial Intelligence. 2013:114-120.
[16]Bengio Y, Delalleau O. On the expressive power of deep architectures[C]. Proc. of the 22nd International
Conference on Algorithmic Learning Theory. 2011:18-36.
[17]Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks[J]. Science,
2006,313:504-507.
[18]Gehring J, Miao Yajie, Metze F, et al. Extracting deep bottleneck features using stacked auto-encoders[C]. Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2013:3377-3381.
[19]Lange S, Riedmiller M. Deep auto-encoder neural networks in reinforcement learning[C]. Proc. of the International Joint Conference on Neural Networks. 2010:18-23.
[20]Suk H, Lee S W, Shen Dinggang. Latent feature representation with stacked auto-encoder for AD/MCI diagnosis[J]. Brain Structure and Function, 2013,218(6):1017-1036.
[21]Iamsa-at S, Horata P. Handwritten character recognition using histograms of oriented gradient features in deep learning of artificial neural network[C]. Proc. of the 3rd International Conference on IT Convergence and Security. 2013:1-5.
[22]Gupta M, Lam S M. Weight decay backpropagation for noisy data[J]. Neural Networks, 1998,11(6):1127-1137.
[23]Li Haifeng, Li Chunguo. Comparative analysis of deep learning structures and algorithms[J]. Journal of Hebei University (Natural Science Edition), 2012,32(5):538-543.
[24]Luo Xuxi, Li Wan. A novel efficient method for training sparse auto-encoders[C]. Proc. of the 6th
International Congress on Image and Signal Processing. 2013:1019-1023.
[25]Deng Jun, Zhang Zixing, Marchi E, et al. Sparse auto-encoder based feature transfer learning for speech emotion recognition[C]. Proc. of the Humaine Association Conference on Affective Computing and Intelligent Interaction. 2013:511-516.
[26]Ma Yunlong, Zhang Peng, Gao Yanan. Parallel auto-encoder for efficient outlier detection[C]. Proc. of the IEEE International Conference on Big Data. 2013:15-17.
[27]Zhang Kaixu, Zhou Changle. Unsupervised learning of Chinese lexical features based on auto-encoders[J]. Journal of Chinese Information Processing, 2013,27(5):1-7,92.
[28]Yang Y, Shu G, Shah M. Semi-supervised learning of feature hierarchies for object detection in a video[C]. Proc. of the IEEE Conference on Computer Vision and Pattern Recognition. 2013:1650-1657.
[29]Wong W K, Sun Mingming. Deep learning regularized Fisher mappings[J]. IEEE Transactions on Neural Networks, 2011,22(10):1668-1675.
[30]Chen Minmin, Xu Zhixiang, Weinberger K Q, et al. Marginalized denoising auto-encoders for domain adaptation[C]. Proc. of the 29th International Conference on Machine Learning. 2012:538-542.
[31]Q Hao, Y Zhe, Z Yajin. A new training principle for stacked denoising auto-encoders[C]. Proc. of the 7th
International Conference on Image and Graphics. 2013:384-389.
[32]Schulz H, Cho K, Raiko T, et al. Two-layer contractive encodings with shortcuts for semi-supervised learning[C]. Proc. of the 20th International Conference on Neural Information Processing. 2013:450-457.
[33]Gutmann M, Hyvarinen A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models[C]. Proc. of the 13th International Conference on Artificial Intelligence and Statistics. 2010:297-304.
[34]Firat O, Vural F T Y. Representation learning with convolutional sparse auto-encoders for remote sensing[C]. Proc. of the 21st Signal Processing and Communications Applications Conference. 2013:24-26.
[35]Sun Zhijun, Xue Lei, Xu Yangming. Overview of deep learning research[J]. Application Research of Computers, 2012,29(8):2806-2810.
[36]Zhuang Yongwen. On sparse coding theory and its applications[J]. Modern Electronics Technique, 2008(7):157-160.
[37]Liu Haining. Sparse coding based equipment condition recognition and its application to heavy roll grinder monitoring[D]. Shanghai: Shanghai Jiao Tong University, 2011.
[38]Sun Zhijun, Xue Lei, Xu Yangming. Marginal Fisher analysis feature extraction algorithm based on deep learning[J]. Journal of Electronics & Information Technology, 2013,35(4):805-811.
[39]Yin Li'ang. A classification method for learning prototypes in deep architectures[D]. Shanghai: Shanghai Jiao Tong University, 2012.
[40]Hinton G E, Krizhevsky A, Wang S D. Transforming auto-encoders[C]. Proc. of the 21st International Conference on Artificial Neural Networks (ICANN). 2011:44-51.
[41]Xia Dingyin. Research on key technologies for efficient annotation and interpretation of Internet images[D]. Hangzhou: Zhejiang University, 2010.
[42]Hinton G E, Deng Li, Dahl G E, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups[J]. IEEE Signal Processing Magazine, 2012,29(6):82-97.
[43]Guyon I, Dror G, Lemaire V, et al. Auto-encoders, unsupervised learning, and deep architectures[J]. Journal of Machine Learning Research, 2012,13:37-49.