[1] 张开延,潘杨,娄季朝. 基于ANP-SVM算法的智能变电站过程层网络故障分类[J]. 计算机与现代化, 2019(7):72-77.
[2] 田宇,李宇,谢佳. 基于视频确认的变电站顺控操作系统[J]. 计算机与现代化, 2019(5):74-79.
[3] 刘建明,施明泰,庄玉琳,等. 增强现实、虚拟现实和混合现实技术在电力系统的应用研究[J]. 电力信息与通信技术, 2017,15(4):4-11.
[4] 尹宏鹏,陈波,柴毅,等. 基于视觉的目标检测与跟踪综述[J]. 自动化学报, 2016,42(10):1466-1489.
[5] 杨佳东. 基于控制点的平面目标跟踪算法及其在增强现实中的应用[D]. 重庆:重庆邮电大学, 2018.
[6] 李鑫,曾梓浩,朱凌寒. 一种基于速度预测的双目视觉跟踪混合算法[J]. 自动化仪表, 2018,39(4):79-83.
[7] 贾静平,覃亦华. 基于深度学习的视觉跟踪算法研究综述[J]. 计算机科学, 2017,44(S1):19-23.
[8] KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]// Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. 2007:225-234.
[9] ENGEL J, SCHOPS T, CREMERS D. LSD-SLAM: Large-scale direct monocular SLAM[C]// Proceedings of the 2014 European Conference on Computer Vision. 2014:834-849.
[10] LEUTENEGGER S, LYNEN S, BOSSE M, et al. Keyframe-based visual-inertial odometry using nonlinear optimization[J]. International Journal of Robotics Research, 2015,34(3):314-334.
[11] TAN W, LIU H M, DONG Z L, et al. Robust monocular SLAM in dynamic environments[C]// Proceedings of the 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 2013:209-218.
[12] QIN T, LI P L, SHEN S J. VINS-Mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018,34(4):1004-1020.
[13] ALLAK E, HARDT-STREMAYR A, WEISS S. Key-frame strategy during fast image-scale changes and zero motion in VIO without persistent features[C]// Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2018:6872-6879.
[14] MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015,31(5):1147-1163.
[15] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018,40(3):611-625.
[16] YOUNES G, ASMAR D, SHAMMAS E, et al. Keyframe-based monocular SLAM: Design, survey, and future directions[J]. Robotics and Autonomous Systems, 2017,98:67-88.
[17] CHAO H Y, GU Y, NAPOLITANO M. A survey of optical flow techniques for robotics navigation applications[J]. Journal of Intelligent and Robotic Systems, 2014,73(1-4):361-372.
[18] YOUSIF K, BAB-HADIASHAR A, HOSEINNEZHAD R. An overview to visual odometry and visual SLAM: Applications to mobile robotics[J]. Intelligent Industrial Systems, 2015,1(4):289-311.
[19] GAO X, ZHANG T, LIU Y, et al. 14 Lectures on Visual SLAM: From Theory to Practice[M]. Beijing: Publishing House of Electronics Industry, 2017:17-22.
[20] ROTH S, BLACK M J. On the spatial statistics of optical flow[J]. International Journal of Computer Vision, 2007,74(1):33-50.
[21] HOWARD A. Real-time stereo visual odometry for autonomous ground vehicles[C]// Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2008:3946-3952.
[22] 曹若琛,陈靖,王涌天. 一种面向移动终端的混合跟踪定位算法[J]. 太原理工大学学报, 2016,47(4):506-512.
[23] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]// Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. 2012:3354-3361.