Computer and Modernization (计算机与现代化) ›› 2023, Vol. 0 ›› Issue (01): 74-80.

• Algorithm Design and Analysis •

Robust Defense Method for Graph Convolutional Neural Networks

  

  1. (College of Alibaba Business, Hangzhou Normal University, Hangzhou 311121, Zhejiang, China)
  • Online: 2023-03-02 Published: 2023-03-02
  • About the authors: QIAN Xiaozhao (b. 1999), male, from Wenzhou, Zhejiang, master's degree candidate; research interests: graphs and risk control; E-mail: 956749015@qq.com. Corresponding author: WANG Peng (b. 1981), male, from Yueyang, Hunan, Ph.D., master's supervisor; research interests: dynamic mechanisms of online human behavior and identification of fraudulent-transaction features; E-mail: wangpeng_621@163.com.
  • Funding:
    National Natural Science Foundation of China (61304150); Zhejiang Provincial Natural Science Foundation (LQ13F030015)


Abstract: Research on graph convolutional neural networks (GCNs) and their applications has matured rapidly. Although GCNs now achieve strong performance, their robustness under adversarial attack remains poor. Most existing defense methods are heuristic, empirically driven algorithms that do not address why the structure of GCNs is vulnerable. Recent work has shown that this vulnerability stems from non-robust aggregation functions. This paper analyzes the robustness of the winsorised mean and the mean aggregation function from the perspectives of the breakdown point and the influence function. The winsorised mean has a higher breakdown point than the mean. Its influence function is bounded (with jumps), so it resists outliers, whereas the influence function of the mean is unbounded and therefore highly sensitive to outliers. Building on the GCN framework, an improved robust defense method, WinsorisedGCN, is then proposed by replacing the aggregation function in the graph convolution operator with the more robust winsorised mean. Finally, the Nettack adversarial attack method is used to study the robustness of the proposed model under different perturbation budgets, with model performance evaluated by accuracy and classification margin. Experimental results show that, compared with other benchmark models, the proposed defense scheme effectively improves robustness under adversarial attack while preserving model accuracy.

Key words: graph convolutional neural networks, graph adversarial training, robustness
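The winsorised mean discussed in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation; the function name, the trimming fraction `alpha`, and the toy data are assumptions made here for demonstration.

```python
import numpy as np

def winsorised_mean(x, alpha=0.1):
    """Winsorised mean: clamp the lowest and highest alpha-fraction of
    values to the nearest retained order statistics, then average.
    Unlike the trimmed mean, extreme values are replaced, not dropped."""
    x = np.sort(np.asarray(x, dtype=float))   # np.sort returns a copy
    k = int(np.floor(alpha * len(x)))         # values clamped at each tail
    if k > 0:
        x[:k] = x[k]                          # clamp lower tail
        x[-k:] = x[-k - 1]                    # clamp upper tail
    return x.mean()

clean = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.05, 0.95, 1.15, 0.85]
poisoned = clean[:-1] + [100.0]               # one adversarial outlier

print(np.mean(poisoned))                      # mean is dragged far from ~1.0
print(winsorised_mean(poisoned, alpha=0.1))   # stays close to ~1.0
```

This illustrates the breakdown-point argument: a single outlier moves the mean arbitrarily far, while the winsorised mean clamps it to an interior order statistic and barely moves.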

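To make the aggregation-swap idea concrete, the following is a minimal NumPy sketch of one message-passing step with the mean aggregator replaced by a feature-wise winsorised mean. The function names, the toy star graph, and the poisoned-feature setup are hypothetical illustrations and are not the paper's WinsorisedGCN implementation.

```python
import numpy as np

def winsorise(col, alpha=0.25):
    # Clamp the lowest/highest alpha-fraction of a 1-D array to the
    # nearest retained order statistics (np.sort returns a copy).
    col = np.sort(np.asarray(col, dtype=float))
    k = int(np.floor(alpha * len(col)))
    if k > 0:
        col[:k] = col[k]
        col[-k:] = col[-k - 1]
    return col

def aggregate(adj, X, robust=True, alpha=0.25):
    """One message-passing step: for each node, pool the feature vectors
    of its neighbours (self-loop included), feature dimension by dimension."""
    n = adj.shape[0]
    A = adj + np.eye(n)                        # add self-loops, GCN-style
    out = np.zeros_like(X, dtype=float)
    for i in range(n):
        neigh = X[A[i] > 0]                    # neighbour feature rows
        if robust:
            # winsorised mean per feature dimension
            out[i] = [winsorise(neigh[:, d], alpha).mean()
                      for d in range(X.shape[1])]
        else:
            out[i] = neigh.mean(axis=0)        # ordinary mean aggregation
    return out

# A star graph: node 0 is connected to nodes 1..4; node 4 is poisoned.
adj = np.zeros((5, 5))
adj[0, 1:] = adj[1:, 0] = 1
X = np.ones((5, 2))
X[4] = 50.0                                    # adversarial feature injection

print(aggregate(adj, X, robust=False)[0])      # mean pooling is skewed
print(aggregate(adj, X, robust=True)[0])       # winsorised pooling resists
```

The design point mirrors the abstract: only the pooling function changes, so the surrounding convolution (weights, nonlinearity, normalization) can stay as in a standard GCN while the aggregation step becomes robust to a small fraction of adversarial neighbours.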