Computer and Modernization (计算机与现代化) ›› 2023, Vol. 0 ›› Issue (10): 70-76. DOI: 10.3969/j.issn.1006-2475.2023.10.011

• Image Processing •

A Weakening Joint Reinforcement Method to Improve the Robustness of Image Recognition Models


1. (School of Computer Science, South China Normal University, Guangzhou 510631, Guangdong, China)
• Online: 2023-10-26  Published: 2023-10-26
• About the authors: LI Shida (1997—), male, from Pingyuan, Guangdong, master's student; research interest: computer vision; E-mail: lsaitta@163.com. XIANG Jianwen (1998—), male, from Shangrao, Jiangxi, master's student; research interest: computer vision; E-mail: xiangjianwen0813@163.com.



Abstract: How to enhance the robustness of models against adversarial example attacks is an important research direction. This paper proposes a method for improving the robustness of image recognition models. The method consists of a weakening operation and a reinforcement operation. The weakening operation attenuates the pixel values of the original input and destroys the structure of the adversarial perturbation; this reduces the adversarial perturbation in the image but also loses some spatial semantic information, which the reinforcement operation then restores. The reinforcement operation consists of a feature extractor and a feature selector: the feature extractor extracts suitable image feature maps, and the feature selector, designed to pick out the robust parts of these maps, fuses their content and outputs feature maps that carry little perturbation but rich spatial semantic information. Extensive comparison experiments confirm the effectiveness of the method against adversarial examples and reveal the error accumulation phenomenon of adversarial perturbations.
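The abstract describes the pipeline only at a high level. The following is a minimal PyTorch-style sketch of how such a weaken-then-reinforce classifier could be wired together, assuming an illustrative pixel-attenuation step as the weakening operation, a small convolutional feature extractor, and a channel-wise gating module as the feature selector. All class names, layer sizes, and the attenuation factor are hypothetical and are not the authors' actual design.

import torch
import torch.nn as nn

class WeakenReinforceNet(nn.Module):
    """Hypothetical weaken-then-reinforce classifier (illustrative only)."""

    def __init__(self, num_classes: int = 10, weaken_factor: float = 0.5):
        super().__init__()
        self.weaken_factor = weaken_factor  # strength of pixel-value attenuation (assumed)
        # Feature extractor: produces candidate feature maps from the weakened image.
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Feature selector: scores each channel and re-weights the feature maps,
        # so channels with less perturbation and more semantic content dominate.
        self.selector = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 64), nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )

    def weaken(self, x: torch.Tensor) -> torch.Tensor:
        # Weakening operation: attenuate pixel values to break the structure of
        # small adversarial perturbations (this also discards some semantic detail).
        return torch.clamp(x * self.weaken_factor, 0.0, 1.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weak = self.weaken(x)                                   # weakened input
        feats = self.extractor(weak)                            # candidate feature maps
        gates = self.selector(feats).unsqueeze(-1).unsqueeze(-1)
        fused = feats * gates                                   # reinforcement: gated fusion
        return self.classifier(fused)

if __name__ == "__main__":
    model = WeakenReinforceNet()
    images = torch.rand(4, 3, 32, 32)   # dummy batch of RGB images in [0, 1]
    print(model(images).shape)          # torch.Size([4, 10])

The channel gating here merely stands in for the role the abstract assigns to the feature selector (fusing feature maps and keeping the robust, semantically rich parts); the actual method may fuse maps from several extraction stages or use a different selection mechanism.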

Key words: deep learning, neural networks, adversarial examples, image recognition

CLC number: