[1] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013.
[2] KINGMA D, BA J. Adam: A method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
[3] PAPERNOT N, MCDANIEL P, GOODFELLOW I. Transferability in machine learning: From phenomena to black-box attacks using adversarial samples[J]. arXiv preprint arXiv:1605.07277, 2016.
[4] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv preprint arXiv:1412.6572, 2014.
[5] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[J]. arXiv preprint arXiv:1503.02531, 2015.
[6] PAPERNOT N, MCDANIEL P, WU X, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]// 2016 IEEE Symposium on Security and Privacy (SP). 2016:582-597.
[7] YANG J Y. Adversarial example defense scheme for deep learning based on iterative autoencoders[J]. Journal of Cyber Security, 2019,4(6):34-44.
[8] GONG Z T, WANG W L, KU W S. Adversarial and clean data are not twins[J]. arXiv preprint arXiv:1704.04960, 2017.
[9] HENDRYCKS D, GIMPEL K. Early methods for detecting adversarial images[J]. arXiv preprint arXiv:1608.00530, 2016.
[10] WEI W Q, LIU L, LOPER M, et al. Cross-layer strategic ensemble defense against adversarial examples[C]// 2020 International Conference on Computing, Networking and Communications (ICNC). 2020:456-460.
[11] CHOW K H, WEI W Q, WU Y Z, et al. Denoising and verification cross-layer ensemble against black-box adversarial attacks[C]// 2019 IEEE International Conference on Big Data. 2019:1282-1291.
[12] SAMANGOUEI P, KABKAB M, CHELLAPPA R. Defense-GAN: Protecting classifiers against adversarial attacks using generative models[J]. arXiv preprint arXiv:1805.06605, 2018.
[13] TRAMÈR F, KURAKIN A, PAPERNOT N, et al. Ensemble adversarial training: Attacks and defenses[J]. arXiv preprint arXiv:1705.07204, 2017.
[14] TANAY T, GRIFFIN L. A boundary tilting persepective on the phenomenon of adversarial examples[J]. arXiv preprint arXiv:1608.07690, 2016.
[15] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. 2014:2672-2680.
[16] MIRZA M, OSINDERO S. Conditional generative adversarial nets[J]. arXiv preprint arXiv:1411.1784, 2014.
[17] LIU L, WEI W Q, CHOW K H, et al. Deep neural network ensembles against deception: Ensemble diversity, accuracy and robustness[C]// Proceedings of the 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). 2019:274-282.
[18] JANDIAL S, MANGLA P, VARSHNEY S, et al. AdvGAN++: Harnessing latent layers for adversary generation[C]// 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). 2019:2045-2048.
[19] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]// 2017 IEEE Symposium on Security and Privacy. 2017:39-57.
[20] TAORI R, KAMSETTY A, CHU B, et al. Targeted adversarial examples for black box audio systems[C]// 2019 IEEE Security and Privacy Workshops. 2019:15-20.
[21] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]// Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. 2017:506-519.
[22]SZELISKI R. Computer Vision: Algorithms and Applications[M]. Springer Science & Business Media, 2010.
[23]LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998,86(11):2278-2324.