Computer and Modernization ›› 2021, Vol. 0 ›› Issue (07): 65-70.

• Artificial Intelligence •

CGAN-based Adversarial Example Defense Method

  

  1. (1. College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao 266580, China;
    2. College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China)

  • Online: 2021-08-02 Published: 2021-08-02
  • About the authors: LI Shibao (1978—), male, native of Weifang, Shandong, associate professor, master's supervisor, master's degree; research interests: mobile computing, interference alignment; E-mail: lishibao@upc.edu.cn. CAO Dapeng (1996—), male, native of Weifang, Shandong, master's student; research interest: adversarial examples; E-mail: 329042713@qq.com. LIU Jianhang (1978—), male, native of Jinzhou, Liaoning, associate professor, master's supervisor, Ph.D.; research interests: Internet of Vehicles, wireless sensor networks; E-mail: liujianhang@upc.edu.cn.
  • Funding:
    National Natural Science Foundation of China (61972417); the Fundamental Research Funds for the Central Universities (18CX02134A, 19CX05003A-4, 18CX02137A); Shandong Province Postgraduate Supervisor Guidance Ability Improvement Project (SDYY18025)




Abstract: Artificial intelligence is now successfully applied in many fields, yet adversarial examples can cause neural network models to output incorrect classifications. Studying how to improve the robustness of neural network models while preserving runtime efficiency is of great significance for deploying deep learning in practice. To address this problem, this paper proposes Defense-CGAN, an adversarial example defense method based on a conditional generative adversarial network (CGAN). Firstly, the CGAN generator produces reconstructed images from input noise and label information. Then, the mean squared error (MSE) between each reconstructed image and the original input is computed, and the closest reconstruction is selected and fed to the classifier for classification, thereby removing the adversarial perturbation and defending against adversarial examples. Finally, extensive experiments are carried out on the MNIST dataset. The experimental results show that the proposed defense method is more general than existing defenses, can resist multiple kinds of adversarial attacks, and incurs low time consumption, so it can be applied to real-world scenarios with extremely strict time requirements.
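The selection step described in the abstract — reconstruct candidate images with the generator, compare each to the input by MSE, and keep the closest reconstruction — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `generator` here is a toy deterministic stand-in for the trained CGAN generator, and the loop assumes candidates are drawn per class label, with the selected reconstruction then passed to the downstream classifier.

```python
import numpy as np

def generator(noise, label, shape=(28, 28)):
    """Toy stand-in for a trained CGAN generator: produces a deterministic
    image for each (noise, label) pair. A real Defense-CGAN would run the
    trained conditional generator network here."""
    seed = hash((round(float(noise), 6), int(label))) % (2**32)
    return np.random.default_rng(seed).random(shape)

def defend(x, labels, n_candidates=8, seed=0):
    """Reconstruct n_candidates images per label, keep the reconstruction
    with the smallest MSE to the (possibly adversarial) input x, and return
    it together with its conditioning label and the achieved MSE."""
    rng = np.random.default_rng(seed)
    best_img, best_label, best_mse = None, None, np.inf
    for label in labels:
        for _ in range(n_candidates):
            z = rng.standard_normal()          # latent noise for the generator
            rec = generator(z, label, x.shape)  # candidate reconstruction
            mse = float(np.mean((rec - x) ** 2))
            if mse < best_mse:
                best_img, best_label, best_mse = rec, label, mse
    return best_img, best_label, best_mse
```

In the full method, `best_img` (a clean reconstruction rather than the perturbed input) is what gets fed to the classifier, which is why the adversarial perturbation no longer affects the prediction.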

Key words: adversarial examples, neural network, adversarial example defense, conditional generative adversarial network, deep learning