[1] HUANG H X, MA X J, ERFANI S M, et al. Unlearnable examples: Making personal data unexploitable[J]. arXiv preprint arXiv:2101.04898, 2021.
[2] CHEN J. Research on key technologies of poisoning attacks and defenses in machine learning[D]. Wuhan: Huazhong University of Science and Technology, 2023.
[3] KUMAR R S S, NYSTRÖM M, LAMBERT J, et al. Adversarial machine learning-industry perspectives[C]// 2020 IEEE Security and Privacy Workshops (SPW). IEEE, 2020:69-75.
[4] ROSENFELD E, WINSTON E, RAVIKUMAR P, et al. Certified robustness to label-flipping attacks via randomized smoothing[C]// International Conference on Machine Learning. PMLR, 2020:8230-8241.
[5] WANG Z T, ZHAI J, MA S Q. BppAttack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2022:15074-15084.
[6] CHENG S Y, LIU Y Q, MA S Q, et al. Deep feature space trojan attack of neural networks by controlled detoxification[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. AAAI, 2021:1148-1156.
[7] GEIPING J, FOWL L, HUANG W R, et al. Witches’ brew: Industrial scale data poisoning via gradient matching[J]. arXiv preprint arXiv:2009.02276, 2020.
[8] DOAN K, LAO Y J, ZHAO W J, et al. LIRA: Learnable, imperceptible and robust backdoor attacks[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. IEEE, 2021:11966-11976.
[9] LI Y M, ZHAI T Q, WU B Y, et al. Rethinking the trigger of backdoor attack[J]. arXiv preprint arXiv:2004.04692, 2020.
[10] YU Y, LIU Q, WU L K, et al. Untargeted attack against federated recommendation systems via poisonous item embeddings and the defense[C]// Proceedings of the 37th AAAI Conference on Artificial Intelligence. AAAI, 2023:4854-4863.
[11] TOLPEGIN V, TRUEX S, GURSOY M E, et al. Data poisoning attacks against federated learning systems[C]// 25th European Symposium on Research in Computer Security. Springer, 2020:480-501.
[12] BARUCH M, BARUCH G, GOLDBERG Y. A little is enough: Circumventing defenses for distributed learning[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. ACM, 2019:8635-8645.
[13] FANG M H, CAO X Y, JIA J Y, et al. Local model poisoning attacks to Byzantine-robust federated learning[C]// Proceedings of the 29th USENIX Conference on Security Symposium. USENIX, 2020:1605-1622.
[14] HE H, ZHA K W, KATABI D. Indiscriminate poisoning attacks on unsupervised contrastive learning[J]. arXiv preprint arXiv:2202.11202, 2022.
[15] HUANG J Y, ZHAO Z L, CHEN L Y, et al. Fabricated flips: Poisoning federated learning without data[C]// 2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). IEEE, 2023:274-287.
[16] LIU G Y, WU W L, ZHANG J S, et al. Targeted poisoning attacks in multimodal contrastive learning[J]. Netinfo Security, 2023,23(11):69-83.
[17] WANG W N, MOU X Q, LIU X B. Modified eigenvector-based feature extraction for hyperspectral image classification using limited samples[J]. Signal, Image and Video Processing, 2020,14(1):711-717.
[18] ZHANG C N, BENZ P, IMTIAZ T, et al. Understanding adversarial examples from the mutual influence of images and perturbations[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2020:14521-14530.
[19] KRIZHEVSKY A. Learning multiple layers of features from tiny images[D]. Toronto: University of Toronto, 2009.
[20] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015,115(3):211-252.
[21] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2016:770-778.
[22] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[23] SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning[C]// Proceedings of the 31st AAAI Conference on Artificial Intelligence, and 29th Innovative Applications of Artificial Intelligence Conference and 7th Symposium on Educational Advances in Artificial Intelligence. AAAI, 2017:4278-4284.
[24] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2017:4700-4708.
[25] SHEN J C, ZHU X L, MA D. TensorClog: An imperceptible poisoning attack on deep neural network applications[J]. IEEE Access, 2019,7:41498-41506.
[26] FU S P, HE F X, LIU Y, et al. Robust unlearnable examples: Protecting data privacy against adversarial learning[J]. arXiv preprint arXiv:2203.14533, 2022.
[27] YU D, ZHANG H S, CHEN W, et al. Availability attacks create shortcuts[C]// Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 2022:2367-2376.
[28] CHEN Q, CHAI Z, WANG Z L, et al. Poisoning attack detection scheme based on generative adversarial network in federated learning[J]. Journal of Computer Applications, 2023,43(12):3790-3798.
[29] ZHANG C, TANG Z, LI K L. Clean-label poisoning attack with perturbation causing dominant features[J]. Information Sciences, 2023,644(C). DOI: 10.1016/j.ins.2023.03.124.
[30] GUPTA P, YADAV K, GUPTA B B, et al. A novel data poisoning attack in federated learning based on inverted loss function[J]. Computers & Security, 2023,130. DOI: 10.1016/j.cose.2023.103270.
[31] SANDOVAL-SEGURA P, SINGLA V, GEIPING J, et al. Autoregressive perturbations for data poisoning[J]. arXiv preprint arXiv:2206.03693, 2022.
[32] ZHANG X L, ZHANG H L, ZHANG G M, et al. Model poisoning attack on neural network without reference data[J]. IEEE Transactions on Computers, 2023,72(10):2978-2989.