[1] RADENOVIC F, TOLIAS G, CHUM O. Fine-tuning CNN image retrieval with no human annotation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019,41(7):1655-1668.
[2] KAVITHA M L, DEEPAMBIKA V A. Review on loop closure detection of visual SLAM[J]. International Journal of Current Engineering and Scientific Research, 2018,6(6):81-86.
[3] BABENKO A, LEMPITSKY V. Aggregating local deep features for image retrieval[C]// 2015 IEEE International Conference on Computer Vision. 2015:1269-1277.
[4] TOLIAS G, SICRE R, JEGOU H. Particular object retrieval with integral max-pooling of CNN activations[J]. Computer Science, 2015:arXiv:1511.05879.
[5] BABENKO A, SLESAREV A, CHIGORIN A, et al. Neural codes for image retrieval[C]// 2014 European Conference on Computer Vision. 2014:584-599.
[6] BAI D D, WANG C Q, ZHANG B, et al. CNN feature boosted SeqSLAM for real-time loop closure detection[J]. Chinese Journal of Electronics, 2018,27(3):488-499.
[7] GAO X, ZHANG T. Unsupervised learning to detect loops using deep neural networks for visual SLAM system[J]. Autonomous Robots, 2017,41(1):1-18.
[8] MERRILL N, HUANG G. Lightweight unsupervised deep loop closure[J]. Robotics, 2018:arXiv:1805.07703.
[9] LIU H, ZHAO C Y, HUANG W P, et al. An end-to-end Siamese convolutional neural network for loop closure detection in visual SLAM system[C]// 2018 IEEE International Conference on Acoustics, Speech and Signal Processing. 2018:3121-3125.
[10] CHEN Y H, KRISHNA T, EMER J S, et al. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks[J]. IEEE Journal of Solid-State Circuits, 2017,52(1):127-138.
[11] ZHANG C, LI P, SUN G Y, et al. Optimizing FPGA-based accelerator design for deep convolutional neural networks[C]// Proceedings of 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. 2015:161-170.
[12] HAN S, MAO H Z, DALLY W J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding[J]. Computer Vision and Pattern Recognition, 2015:arXiv:1510.00149.
[13] HAN S, POOL J, TRAN J, et al. Learning both weights and connections for efficient neural network[C]// Advances in Neural Information Processing Systems. 2015:1135-1143.
[14] PAGE A, MOHSENIN T. FPGA-based reduction techniques for efficient deep neural network development[C]// 2016 IEEE 24th Annual International Symposium on Field-Programmable Custom Computing Machines. 2016:200.
[15] GUPTA S, AGRAWAL A, GOPALAKRISHNAN K, et al. Deep learning with limited numerical precision[C]// 2015 International Conference on Machine Learning. 2015:1737-1746.
[16] HILL P, ZAMIRAI B, LU S, et al. Rethinking numerical representations for deep neural networks[J]. Machine Learning, 2018:arXiv:1808.02513.
[17] MELLEMPUDI N, KUNDU A, DAS D, et al. Mixed low-precision deep learning inference using dynamic fixed point[J]. Machine Learning, 2017:arXiv:1701.08978.
[18] MEI C S, LIU Z Y, NIU Y, et al. A 200 MHz 202.4 GFLOPS@10.8 W VGG16 accelerator in Xilinx VX690T[C]// 2017 IEEE Global Conference on Signal and Information Processing. 2017:784-788.
[19] KALLIOJARVI K, ASTOLA J. Roundoff errors in block-floating-point systems[J]. IEEE Transactions on Signal Processing, 1996,44(4):783-790.
[20] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. Computer Vision and Pattern Recognition, 2014:arXiv:1409.1556.
[21] LEE C Y, GALLAGHER P W, TU Z. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree[C]// Artificial Intelligence and Statistics. 2016:464-472.
[22] JIA Y Q, SHELHAMER E, DONAHUE J, et al. Caffe: Convolutional architecture for fast feature embedding[C]// Proceedings of the 22nd ACM International Conference on Multimedia. 2014:675-678.
[23] DENG J, DONG W, SOCHER R, et al. ImageNet: A large-scale hierarchical image database[C]// 2009 IEEE Conference on Computer Vision and Pattern Recognition. 2009:248-255.
[24] KALANTIDIS Y, MELLINA C, OSINDERO S. Cross-dimensional weighting for aggregated deep convolutional features[C]// European Conference on Computer Vision. 2016:685-701.