[1] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014:580-587.
[2] GIRSHICK R. Fast R-CNN[C]// Proceedings of the IEEE International Conference on Computer Vision. 2015:1440-1448.
[3] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017,39(6):1137-1149.
[4] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:779-788.
[5] REDMON J, FARHADI A. YOLO9000: Better, faster, stronger[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017:7263-7271.
[6] REDMON J, FARHADI A. YOLOv3: An incremental improvement[J]. arXiv preprint arXiv:1804.02767, 2018.
[7] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: Optimal speed and accuracy of object detection[J]. arXiv preprint arXiv:2004.10934, 2020.
[8] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[J]. arXiv preprint arXiv:2207.02696, 2022.
[9] CHANG Y L, ANAGAW A, CHANG L, et al. Ship detection based on YOLOv2 for SAR imagery[J]. Remote Sensing, 2019,11(7). DOI: 10.3390/rs11070786.
[10] YU Y, LI S J, CHEN L, et al. Ship target detection method based on improved YOLO v2[J]. Computer Science, 2019,46(8):332-336.
[11] CHEN L Q, SHI W X, DENG D X. Improved YOLOv3 based on attention mechanism for fast and accurate ship detection in optical remote sensing images[J]. Remote Sensing, 2021,13(4). DOI: 10.3390/rs13040660.
[12] GONG M, LIU Y Y, LI G N. Ship detection method for remote sensing images based on improved YOLO-v3[J]. Electronics Optics & Control, 2020,27(5):102-107.
[13] HE K M, ZHANG X Y, REN S Q, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015,37(9):1904-1916.
[14] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:2818-2826.
[15] JIANG J H, FU X J, QIN R, et al. High-speed lightweight ship detection algorithm based on YOLO-V4 for three-channels RGB SAR image[J]. Remote Sensing, 2021,13(10). DOI: 10.3390/rs13101909.
[16] ZHENG J C, SUN S D, ZHAO S J. Fast ship detection based on lightweight YOLOv5 network[J]. IET Image Processing, 2022,16(6):1585-1593.
[17] TAN X D, PENG H. Improved YOLOv5 for ship target detection in SAR images[J]. Computer Engineering and Applications, 2022,58(4):247-254.
[18] HOU Q B, ZHOU D Q, FENG J S. Coordinate attention for efficient mobile network design[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021:13713-13722.
[19] LI S, FU X, DONG J. Improved ship detection algorithm based on YOLOX for SAR outline enhancement image[J]. Remote Sensing, 2022,14(16). DOI: 10.3390/rs14164070.
[20] GE Z, LIU S T, WANG F, et al. YOLOX: Exceeding YOLO series in 2021[J]. arXiv preprint arXiv:2107.08430, 2021.
[21] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: Common objects in context[C]// Proceedings of the 13th European Conference on Computer Vision (ECCV 2014). 2014:740-755.
[22] ARTHUR D, VASSILVITSKII S. K-means++: The advantages of careful seeding[C]// Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms. 2007:1027-1035.
[23] ZHENG Z H, WANG P, LIU W, et al. Distance-IoU loss: Faster and better learning for bounding box regression[C]// Proceedings of the AAAI Conference on Artificial Intelligence. 2020,34(7):12993-13000.
[24] ZHANG Y F, REN W, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. Neurocomputing, 2022,506:146-157.
[25] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]// Proceedings of the IEEE International Conference on Computer Vision. 2017:2980-2988.
[26] YU J H, JIANG Y N, WANG Z Y, et al. Unitbox: An advanced object detection network[C]// Proceedings of the 24th ACM International Conference on Multimedia. 2016:516-520.
[27] REZATOFIGHI H, TSOI N, GWAK J Y, et al. Generalized intersection over union: A metric and a loss for bounding box regression[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019:658-666.
[28] SUNKARA R, LUO T. No more strided convolutions or pooling: A new CNN building block for low-resolution images and small objects[C]// European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2022). Springer, 2023:443-459.
[29] DI Y H, JIANG Z G, ZHANG H P. A public dataset for fine-grained ship classification in optical remote sensing images[J]. Remote Sensing, 2021,13(4). DOI: 10.3390/rs13040747.
[30] CHEN J Q, CHEN K Y, CHEN H, et al. Contrastive learning for fine-grained ship classification in remote sensing images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022,60. DOI: 10.1109/TGRS.2022.3192256.
[31] XIONG W, XIONG Z Y, CUI Y Q. An explainable attention network for fine-grained ship classification using remote-sensing images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022,60. DOI: 10.1109/TGRS.2022.3162195.
[32] XIAO Q, LIU B, LI Z Y, et al. Progressive data augmentation method for remote sensing ship image classification based on imaging simulation system and neural style transfer[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021,14:9176-9186.