[1] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:770-778.
[2] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:779-788.
[3] ZENG M, WANG Y S, LUO Y. Dirichlet latent variable hierarchical recurrent encoder-decoder in dialogue generation[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019:1267-1272.
[4] WEN Y D, ZHANG K P, LI Z F, et al. A discriminative feature learning approach for deep face recognition[C]// European Conference on Computer Vision. Springer, 2016:499-515.
[5] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv preprint arXiv:1412.6572, 2014.
[6] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013.
[7] WANG W Q, WANG L N, WANG R, et al. Towards a robust deep neural network in texts: A survey[J]. arXiv preprint arXiv:1902.07285, 2019.
[8] WEI Z P, CHEN J L, WEI X, et al. Heuristic black-box adversarial attacks on video recognition models[C]// Proceedings of the AAAI Conference on Artificial Intelligence. 2020,34(7):12338-12345.
[9] XIE C H, WANG J Y, ZHANG Z S, et al. Adversarial examples for semantic segmentation and object detection[C]// Proceedings of the IEEE International Conference on Computer Vision. 2017:1369-1378.
[10]PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]// Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. 2017:506-519.
[11]CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]// 2017 IEEE Symposium on Security and Privacy. IEEE, 2017:39-57.
[12]DONG Y P, LIAO F Z, PANG T Y, et al. Boosting adversarial attacks with momentum[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018:9185-9193.
[13]ALZANTOT M, SHARMA Y, CHAKRABORTY S, et al. Genattack: Practical black-box attacks with gradient-free optimization[C]// Proceedings of the Genetic and Evolutionary Computation Conference. 2019:1111-1119.
[14]SUYA F, CHI J, EVANS D, et al. Hybrid batch attacks: Finding black-box adversarial examples with limited queries[C]// 29th USENIX Security Symposium (USENIX Security 20). 2020:1327-1344.
[15]ILYAS A, ENGSTROM L, ATHALYE A, et al. Black-box adversarial attacks with limited queries and information[C]// International Conference on Machine Learning. 2018:2137-2146.
[16]ILYAS A, ENGSTROM L, MADRY A. Prior convictions: Black-box adversarial attacks with bandits and priors[J]. arXiv preprint arXiv:1807.07978, 2018.
[17]CHEN P Y, ZHANG H, SHARMA Y, et al. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models[C]// Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. 2017:15-26.
[18]SU J W, VARGAS D V, SAKURAI K. One pixel attack for fooling deep neural networks[J]. IEEE Transactions on Evolutionary Computation, 2019,23(5):828-841.
[19]DUAN R J, MA X J, WANG Y S, et al. Adversarial camouflage: Hiding physical-world attacks with natural styles[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020:1000-1008.
[20]SHAMSABADI A S, SANCHEZ-MATILLA R, CAVALLARO A. Colorfool: Semantic adversarial colorization[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020:1151-1160.
[21]DONG Y P, PANG T Y, SU H, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019:4312-4321.
[22]MODAS A, MOOSAVI-DEZFOOLI S M, FROSSARD P. Sparsefool: A few pixels make a big difference[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019:9087-9096.
[23]PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]// 2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2016:372-387.
[24]MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. Deepfool: A simple and accurate method to fool deep neural networks[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:2574-2582.
[25]WEI X X, GUO Y, LI B. Black-box adversarial attacks by manipulating image attributes[J]. Information Sciences, 2021,550:285-296.
[26]BHATTAD A, CHONG M J, LIANG K, et al. Big but imperceptible adversarial perturbations via semantic manipulation[J]. arXiv preprint arXiv:1904.06347, 2019.
[27]HOSSEINI H, POOVENDRAN R. Semantic adversarial examples[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018:1614-1619.
[28]KURAKIN A, GOODFELLOW I J, BENGIO S. Adversarial examples in the physical world[M]// Artificial Intelligence Safety and Security. Chapman and Hall/CRC, 2018:99-112.
[29]XIAO C W, LI B, ZHU J Y, et al. Generating adversarial examples with adversarial networks[J]. arXiv preprint arXiv:1801.02610, 2018.
[30]EYKHOLT K, EVTIMOV I, FERNANDES E, et al. Robust physical-world attacks on deep learning visual classification[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018:1625-1634.
[31]BROWN T B, MANÉ D, ROY A, et al. Adversarial patch[J]. arXiv preprint arXiv:1712.09665, 2017.
[32]NILSBACK M E, ZISSERMAN A. Automated flower classification over a large number of classes[C]// 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing. IEEE, 2008:722-729.
[33]JIANG L Y. Research on attack and defense algorithms for image adversarial examples based on generative adversarial networks[D]. Zhengzhou: Strategic Support Force Information Engineering University, 2019.
[34]MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv preprint arXiv:1706.06083, 2017.
[35]WU D X, WANG Y S, XIA S T, et al. Skip connections matter: On the transferability of adversarial examples generated with resnets[J]. arXiv preprint arXiv:2002.05990, 2020.
[36]LAIDLAW C, FEIZI S. Functional adversarial attacks[J]. arXiv preprint arXiv:1906.00001, 2019.
[37]SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[38]STORN R, PRICE K. Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces[J]. Journal of Global Optimization, 1997,11(4):341-359.
[39]SONG Y, SHU R, KUSHMAN N, et al. Constructing unrestricted adversarial examples with generative models[C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. 2018:8322-8333.
[40]SHARIF M, BHAGAVATULA S, BAUER L, et al. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition[C]// Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 2016:1528-1540.
[41]XU K D, ZHANG G Y, LIU S J, et al. Adversarial T-shirt! Evading person detectors in a physical world[C]// European Conference on Computer Vision. Springer, 2020:665-681.
[42]TSAFTARIS S A, CASADIO F, ANDRAL J L, et al. A novel visualization tool for art history and conservation: Automated colorization of black and white archival photographs of works of art[J]. Studies in Conservation, 2014,59(3):125-135.
[43]QU Y, WONG T T, HENG P A. Manga colorization[J]. ACM Transactions on Graphics (TOG), 2006,25(3):1214-1220.
[44]MARKLE W, HUNT B. Coloring a black and white signal using motion detection: U.S. Patent 4,755,870[P]. 1988-7-5.
[45]CHENG Z Z, YANG Q X, SHENG B. Deep colorization[C]// Proceedings of the IEEE International Conference on Computer Vision. 2015:415-423.
[46]CAO Y, ZHOU Z M, ZHANG W N, et al. Unsupervised diverse colorization via generative adversarial networks[C]// Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2017:151-166.
[47]ZHANG R, ISOLA P, EFROS A A. Colorful image colorization[C]// European Conference on Computer Vision. Springer, 2016:649-666.
[48]IIZUKA S, SIMO-SERRA E, ISHIKAWA H. Let there be color! Joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification[J]. ACM Transactions on Graphics (TOG), 2016,35(4):1-11.
[49]DESHPANDE A, LU J, YEH M C, et al. Learning diverse image colorization[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017:6837-6845.
[50]WANG H S. Research on plant leaf segmentation and phenotype analysis based on fully convolutional neural networks[D]. Changchun: Jilin Agricultural University, 2020.
[51]ZHANG B J. Research on automatic delineation of brain tumor target volumes in radiotherapy based on deep learning[D]. Hefei: Hefei University of Technology, 2020.
[52]LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015:3431-3440.
[53]GASTAL E S L, OLIVEIRA M M. Domain transform for edge-aware image and video processing[J]. ACM Transactions on Graphics, 2011,30(4):1-12.
[54]KRIZHEVSKY A, SUTSKEVER I, HINTON G E. Imagenet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017,60(6):84-90.
[55]IANDOLA F N, HAN S, MOSKEWICZ M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5 MB model size[J]. arXiv preprint arXiv:1602.07360, 2016.