Computer and Modernization ›› 2014, Vol. 0 ›› Issue (8): 128-134.doi: 10.3969/j.issn.1006-2475.2014.08.028
Received: 2014-05-20
Online: 2014-08-15
Published: 2014-08-19
QU Jian-ling, DU Chen-fei, DI Ya-zhou, GAO Feng, GUO Chao-ran. Research and Prospect of Deep Auto-encoders[J]. Computer and Modernization, 2014, 0(8): 128-134.
URL: http://www.c-a-m.org.cn/EN/10.3969/j.issn.1006-2475.2014.08.028
[1] Arel I, Rose D C, Karnowski T P. Deep machine learning: A new frontier in artificial intelligence research[J]. IEEE Computational Intelligence Magazine, 2010,5(4):13-18.
[2] Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006,18(7):1527-1554.
[3] Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors[J]. Nature, 1986,323:533-536.
[4] Bengio Y, Lamblin P, Popovici D, et al. Greedy layer-wise training of deep networks[C]. Proc. of the 20th Annual Conference on Neural Information Processing Systems. 2006:153-160.
[5] Vincent P, Larochelle H, Bengio Y. Extracting and composing robust features with denoising auto-encoders[C]. Proc. of the 25th International Conference on Machine Learning. 2008:1096-1103.
[6] Bengio Y. Learning deep architectures for AI[J]. Foundations and Trends in Machine Learning, 2009,2(1):1-127.
[7] Rifai S, Vincent P, Muller X, et al. Contractive auto-encoders: Explicit invariance during feature extraction[C]. Proc. of the 28th International Conference on Machine Learning. 2011:833-840.
[8] Masci J, Meier U, Ciresan D. Stacked convolutional auto-encoders for hierarchical feature extraction[C]. Proc. of the 21st International Conference on Artificial Neural Networks. 2011,6791:52-59.
[9] Guyon I, Dror G, Lemaire V, et al. Auto-encoders, unsupervised learning, and deep architectures[C]. Proc. of the 28th International Conference on Machine Learning. 2012:37-50.
[10] Vincent P, Larochelle H, Lajoie I, et al. Stacked denoising auto-encoders: Learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010,11:3371-3408.
[11]Mitchell B, Sheppard J. Deep structure learning: Beyond connectionist approaches[C]. Proc. of the 11th International Conference on Machine Learning and Applications. 2012:162-167.
[12] Erhan D, Bengio Y, Courville A, et al. Why does unsupervised pre-training help deep learning?[J]. Journal of Machine Learning Research, 2010,11:625-660.
[13] Li Deng, Seltzer M L, Yu Dong, et al. Binary coding of speech spectrograms using a deep auto-encoder[C]. Proc. of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH). 2010:1692-1695.
[14] Lee H, Ekanadham C, Ng A Y. Sparse deep belief net model for visual area V2[C]. Proc. of the 21st Neural Information Processing Systems International Conference. 2007:873-880.
[15] Amaral T, Silva L M, Alexandre L A, et al. Using different cost functions to train stacked auto-encoders[C]. Proc. of the 12th Mexican International Conference on Artificial Intelligence. 2013:114-120.
[16]Bengio Y, Delalleau O. On the expressive power of deep architectures[C]. Proc. of the 22nd International Conference on Algorithmic Learning Theory. 2011:18-36.
[17]Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006,313:504-507.
[18] Gehring J, Miao Yajie, Metze F, et al. Extracting deep bottleneck features using stacked auto-encoders[C]. Proc. of the 26th IEEE International Conference on Acoustics, Speech and Signal Processing. 2013:3377-3381.
[19] Lange S, Riedmiller M. Deep auto-encoder neural networks in reinforcement learning[C]. Proc. of the International Joint Conference on Neural Networks. 2010:18-23.
[20] Suk H, Lee S W, Shen Dinggang. Latent feature representation with stacked auto-encoder for AD/MCI diagnosis[J]. Brain Structure and Function, 2013,218(6):1017-1036.
[21] Iamsa-at S, Horata P. Handwritten character recognition using histograms of oriented gradient features in deep learning of artificial neural network[C]. Proc. of the 3rd International Conference on IT Convergence and Security. 2013:1-5.
[22] Gupta M, Lam S M. Weight decay back propagation for noisy data[J]. Neural Networks, 1998,11(6):1127-1137.
[23] Li Haifeng, Li Chunguo. Comparative analysis of deep learning structures and algorithms[J]. Journal of Hebei University (Natural Science Edition), 2012,32(5):538-543. (in Chinese)
[24] Luo Xuxi, Li Wan. A novel efficient method for training sparse auto-encoders[C]. Proc. of the 6th International Congress on Image and Signal Processing. 2013:1019-1023.
[25] Deng Jun, Zhang Zixing, Marchi E, et al. Sparse auto-encoder based feature transfer learning for speech emotion recognition[C]. Proc. of Humaine Association Conference on Affective Computing and Intelligent Interaction. 2013:511-516.
[26] Ma Yunlong, Zhang Peng, Gao Yanan. Parallel auto-encoder for efficient outlier detection[C]// Proceedings of IEEE International Conference on Big Data. 2013:15-17.
[27] Zhang Kaixu, Zhou Changle. Unsupervised learning of Chinese lexical features based on auto-encoders[J]. Journal of Chinese Information Processing, 2013,27(5):1-7,92. (in Chinese)
[28] Yang Y, Shu G, Shah M. Semi-supervised learning of feature hierarchies for object detection in a video[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2013:1650-1657.
[29] Wong W K, Sun Mingming. Deep learning regularized Fisher mapping[J]. IEEE Transactions on Neural Networks, 2011,22(10):1668-1675.
[30] Chen Minmin, Weinberger K Q, Xu Zhixiang. Marginalized denoising auto-encoders for domain adaptation[C]. Proc. of the 29th International Conference on Machine Learning. 2012:538-542.
[31]Q Hao, Y Zhe, Z Yajin. A new training principle for stacked denoising auto-encoders[C]. Proc. of the 7th International Conference on Image and Graphics. 2013:384-389.
[32] Schulz H, Cho K, Raiko T, et al. Two-layer contractive encodings with shortcuts for semi-supervised learning[C]. Proc. of the 20th International Conference on Neural Information Processing. 2013:450-457.
[33] Gutmann M, Hyvarinen A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models[C]. Proc. of the 13th International Conference on Artificial Intelligence and Statistics. 2010:297-304.
[34] Firat O, Vural F T Y. Representation learning with convolutional sparse auto-encoders for remote sensing[C]. Proc. of the 21st Signal Processing and Communications Applications Conference. 2013:24-26.
[35] Sun Zhijun, Xue Lei, Xu Yangming. Overview of deep learning research[J]. Application Research of Computers, 2012,29(8):2806-2810. (in Chinese)
[36] Zhuang Yongwen. On coefficient coding theory and its applications[J]. Modern Electronics Technique, 2008(7):157-160. (in Chinese)
[37] Liu Haining. Equipment condition recognition based on sparse coding and its application to heavy roll grinder monitoring[D]. Shanghai: Shanghai Jiao Tong University, 2011. (in Chinese)
[38] Sun Zhijun, Xue Lei, Xu Yangming. Marginal Fisher analysis feature extraction algorithm based on deep learning[J]. Journal of Electronics & Information Technology, 2013,35(4):805-811. (in Chinese)
[39] Yin Li'ang. A classification method for learning prototypes in deep architectures[D]. Shanghai: Shanghai Jiao Tong University, 2012. (in Chinese)
[40] Hinton G E, Krizhevsky A, Wang S. Transforming auto-encoders[C]. Proc. of the International Conference on Artificial Neural Networks. 2011:387-396.
[41] Xia Dingyin. Research on key technologies of efficient annotation and interpretation of Internet images[D]. Hangzhou: Zhejiang University, 2010. (in Chinese)
[42] Hinton G E, Li Deng, Dahl G E, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups[J]. IEEE Signal Processing Magazine, 2012,29(6):82-97.
[43] Guyon I, Dror G, Lemaire V, et al. Auto-encoders, unsupervised learning, and deep architectures[J]. Journal of Machine Learning Research, 2012,13:37-49.