[1] BUSTA M, NEUMANN L, MATAS J. FASText: Efficient unconstrained scene text detector[C]// Proceedings of the IEEE International Conference on Computer Vision. 2015:1206-1214.
[2] EPSHTEIN B, OFEK E, WEXLER Y. Detecting text in natural scenes with stroke width transform[C]// 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2010:2963-2970.
[3] NEUMANN L, MATAS J. Real-time scene text localization and recognition[C]// 2012 IEEE Conference on Computer Vision and Pattern Recognition. 2012:3538-3545.
[4] TIAN S X, PAN Y F, HUANG C, et al. Text flow: A unified text detection system in natural scene images[C]// Proceedings of the IEEE International Conference on Computer Vision. 2015:4651-4659.
[5] LIAO M H, SHI B G, BAI X. TextBoxes++: A single-shot oriented scene text detector[J]. IEEE Transactions on Image Processing, 2018,27(8):3676-3690.
[6] LIU W, ANGUELOV D, ERHAN D, et al. SSD: Single shot multibox detector[C]// European Conference on Computer Vision. 2016:21-37.
[7] LIU Y L, JIN L W. Deep matching prior network: Toward tighter multi-oriented text detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017:3454-3461.
[8] BUSTA M, NEUMANN L, MATAS J. Deep TextSpotter: An end-to-end trainable scene text localization and recognition framework[C]// Proceedings of the IEEE International Conference on Computer Vision. 2017:2223-2231.
[9] REDMON J, FARHADI A. YOLO9000: Better, faster, stronger[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017:6517-6525.
[10] WANG J Q, CHEN K, YANG S, et al. Region proposal by guided anchoring[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019:2960-2969.
[11] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]// Proceedings of the IEEE International Conference on Computer Vision. 2017:2999-3007.
[12] SERMANET P, EIGEN D, ZHANG X, et al. OverFeat: Integrated recognition, localization and detection using convolutional networks[J]. arXiv:1312.6229, 2013.
[13] KONG T, SUN F C, LIU H P, et al. Consistent optimization for single-shot object detection[J]. arXiv:1901.06563, 2019.
[14] GUPTA A, VEDALDI A, ZISSERMAN A. Synthetic data for text localisation in natural images[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:2315-2324.
[15] ZHOU X Y, YAO C, WEN H, et al. EAST: An efficient and accurate scene text detector[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017:2642-2651.
[16] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems. 2015:91-99.
[17] SHI B G, BAI X, BELONGIE S. Detecting oriented text in natural images by linking segments[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017:3482-3490.
[18] CHEN Y P, FAN H Q, XU B, et al. Drop an octave: Reducing spatial redundancy in convolutional neural networks with octave convolution[C]// Proceedings of the IEEE International Conference on Computer Vision. 2019:3434-3443.
[19] TIAN Z, HUANG W L, HE T, et al. Detecting text in natural image with connectionist text proposal network[C]// European Conference on Computer Vision. 2016:56-72.
[20] CHEN J, WU Q, LIU D, et al. Foreground-background imbalance problem in deep object detectors: A review[C]// IEEE Conference on Multimedia Information Processing and Retrieval. 2020:285-290.
[21] XIE E, ZANG Y H, SHAO S, et al. Scene text detection with supervised pyramid context network[C]// Proceedings of the AAAI Conference on Artificial Intelligence. 2019:9038-9045.
[22] ZHONG Z Y, JIN L W, ZHANG S P, et al. DeepText: A unified framework for text proposal generation and text detection in natural images[C]// 2017 IEEE International Conference on Acoustics, Speech and Signal Processing. 2017:1208-1212.
[23] REDMON J, FARHADI A. YOLOv3: An incremental improvement[J]. arXiv:1804.02767, 2018.