Computer and Modernization ›› 2023, Vol. 0 ›› Issue (01): 74-80.


Robust Defense Method for Graph Convolutional Neural Network

  

  1. (College of Alibaba Business, Hangzhou Normal University, Hangzhou 311121, China)
  • Online: 2023-03-02   Published: 2023-03-02

Abstract: Recently, research on and application of graph convolutional neural networks (GCNs) have matured considerably. Although their performance has reached a high level, GCNs exhibit poor robustness when subjected to adversarial attacks. Most existing defense methods are based on heuristic, empirical algorithms and do not address the causes of the structural vulnerability of GCNs. Recent research has shown that GCNs are vulnerable because of their non-robust aggregation functions. This paper analyzes the robustness of the winsorised mean and the mean aggregation functions in terms of breakdown point and influence function. The winsorised mean has a higher breakdown point than the mean. Its influence function is bounded and resistant to outliers, whereas the influence function of the mean is unbounded and highly sensitive to outliers. An improved robust defense method, WinsorisedGCN, is then proposed within the GCN framework by replacing the aggregation function in the graph convolution operator with the more robust winsorised mean. Finally, this paper uses the Nettack adversarial attack method to study and analyze the robustness of the proposed model under different perturbation budgets, evaluating model performance with accuracy and classification margin metrics. Experimental results demonstrate that, compared with other benchmark models, the proposed defense scheme effectively improves model robustness under adversarial attacks while preserving model accuracy.
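The robustness argument above rests on the winsorised mean's bounded influence function: clipping values at chosen quantiles before averaging prevents any single outlier from dragging the aggregate arbitrarily far. A minimal sketch of this effect (not the paper's implementation; the quantile cut-offs 0.2/0.8 and the toy neighbor values are illustrative assumptions):

```python
import numpy as np

def winsorised_mean(x, lower=0.2, upper=0.8):
    """Clip values below/above the given quantiles, then average."""
    lo, hi = np.quantile(x, [lower, upper])
    return float(np.clip(x, lo, hi).mean())

# Toy "neighbor features" being aggregated at one node.
clean = np.array([1.0, 1.1, 0.9, 1.0, 1.05])
poisoned = np.append(clean, 100.0)  # one adversarially injected outlier

# The plain mean is dragged far from 1.0 by the outlier;
# the winsorised mean stays close to the clean aggregate.
print(poisoned.mean())          # heavily distorted
print(winsorised_mean(poisoned))  # close to 1.0
```

In a GCN layer, the same idea would replace the (degree-weighted) mean over a node's neighborhood with this clipped average, which is why a single perturbed edge or feature vector has only a bounded effect on the aggregated representation.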

Key words: graph convolutional neural networks, graph adversarial training, robustness