[1] 朱帅. 转炉炼钢终点控制技术探究[J]. 冶金管理, 2021(9):3-4.
[2] 程春灵. 试析转炉炼钢的终点控制技术[J]. 黑龙江冶金, 2017,37(2):51-53.
[3] 李仕龙,郭红娟. 转炉炼钢终点控制技术应用现状探讨[J]. 中国金属通报, 2018(12):22.
[4] 安丰涛,郝建标,王文辉. 副枪测量与数据分析自动炼钢技术的应用[J]. 河北冶金, 2019(5):47-50.
[5] 刘辉,张云生,张印辉,等. 基于火焰图像特征与GRNN的转炉吹炼状态识别[J]. 计算机工程与应用, 2011,47(26):7-10.
[6] 刘辉,张云生,张印辉,等. 基于灰度差分统计的火焰图像纹理特征提取[J]. 控制工程, 2013,20(2):213-218.
[7] 李超,刘辉. 改进MTBCD火焰图像特征提取的转炉炼钢终点碳含量预测[J/OL]. 计算机集成制造系统:1-22[2021-10-21]. http://kns.cnki.net/kcms/detail/11.5946.TP.20210428.1806.020.html.
[8] 江帆,刘辉,王彬,等. 基于火焰图像CNN的转炉炼钢吹炼终点判断方法[J]. 计算机工程, 2016,42(10):277-282.
[9] 庞殊杨,王姝洋,贾鸿盛. 基于残差神经网络实现转炉火焰状态识别[J]. 冶金自动化, 2021,45(1):34-43.
[10] HAN Y, ZHANG C J, WANG L, et al. Industrial IoT for intelligent steelmaking with converter mouth flame spectrum information processed by deep learning[J]. IEEE Transactions on Industrial Informatics, 2020,16(4):2640-2650.
[11] 张晓光. 转炉炼钢中的炉口火焰线性回归分析[D]. 南京:南京理工大学, 2008.
[12] SIMONYAN K, ZISSERMAN A. Two-stream convolutional networks for action recognition in videos[C]// Neural Information Processing Systems (NIPS). 2014:568-576.
[13] FEICHTENHOFER C, PINZ A, ZISSERMAN A. Convolutional two-stream network fusion for video action recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016:1933-1941.
[14] JI S W, XU W, YANG M, et al. 3D convolutional neural networks for human action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013,35(1):221-231.
[15] TRAN D, BOURDEV L, FERGUS R, et al. Learning spatiotemporal features with 3D convolutional networks[C]// 2015 IEEE International Conference on Computer Vision (ICCV). 2015:4489-4497.
[16] HARA K, KATAOKA H, SATOH Y. Learning spatio-temporal features with 3D residual networks for action recognition[C]// 2017 IEEE International Conference on Computer Vision Workshops (ICCVW). 2017:3154-3160.
[17] QIU Z F, YAO T, MEI T. Learning spatio-temporal representation with Pseudo-3D residual networks[C]// 2017 IEEE International Conference on Computer Vision (ICCV). 2017:5534-5542.
[18] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-excitation networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020,42(8):2011-2023.
[19] WOO S, PARK J, LEE J Y, et al. CBAM: Convolutional block attention module[C]// Proceedings of the 2018 European Conference on Computer Vision (ECCV). 2018:3-19.
[20] GAO Z L, XIE J T, WANG Q L, et al. Global second-order pooling convolutional networks[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2019:3019-3028.
[21] HU J, SHEN L, ALBANIE S, et al. Gather-excite: Exploiting feature context in convolutional neural networks[C]// 2018 Neural Information Processing Systems (NIPS). 2018:9423-9433.
[22] WANG X L, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]// Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018:7794-7803.
[23] WANG Q L, WU B G, ZHU P F, et al. ECA-Net: Efficient channel attention for deep convolutional neural networks[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2020:11531-11539.
[24] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998,86(11):2278-2324.
[25] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016:770-778.
[26] ZAGORUYKO S, KOMODAKIS N. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer[J]. arXiv preprint arXiv:1612.03928, 2016.