Arms Race in Adversarial Graph Learning

⚔🛡 GraphWar: Arms Race in Graph Adversarial Attack and Defense

GraphWar is Cooooooooming!!!!!

[Documentation] | [Examples]

Know thy self, know thy enemy. A thousand battles, a thousand victories.

— Sun Tzu, *The Art of War*, "Attack by Stratagem"

💨 News

  • May 27, 2022: GraphWar has been refactored with PyTorch Geometric (PyG); the old DGL-based code can be found here. We will soon release the first version of GraphWar; stay tuned!

NOTE: GraphWar is still in the early stages and the API will likely continue to change. If you are interested in this project, don't hesitate to contact me or make a PR directly.

🚀 Installation

Please make sure you have installed PyTorch and PyTorch Geometric (PyG).

# Coming soon
pip install -U graphwar

or

# Recommended
git clone https://github.com/EdisonLeeeee/GraphWar.git && cd GraphWar
pip install -e . --verbose

where `-e` means "editable" mode, so you don't have to reinstall every time you make changes.

⚡ Get Started

Assume that you have a torch_geometric.data.Data instance data that describes your graph.

How fast can you train and evaluate your own GNN?

Take GCN as an example:

from graphwar.nn.models import GCN
from graphwar.training import Trainer
from torch_geometric.datasets import Planetoid
dataset = Planetoid(root='.', name='Cora') # Any PyG dataset is available!
data = dataset[0]
model = GCN(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0')
trainer.fit({'data': data, 'mask': data.train_mask})
trainer.evaluate({'data': data, 'mask': data.test_mask})

A simple targeted manipulation attack

from graphwar.attack.targeted import RandomAttack
attacker = RandomAttack(data)
attacker.attack(1, num_budgets=3) # attacking target node `1` with `3` edges 
attacked_data = attacker.data()
edge_flips = attacker.edge_flips()

A simple untargeted (non-targeted) manipulation attack

from graphwar.attack.untargeted import RandomAttack
attacker = RandomAttack(data)
attacker.attack(num_budgets=0.05) # attacking the graph by perturbing 5% of the edges
attacked_data = attacker.data()
edge_flips = attacker.edge_flips()
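For intuition, the random-flip idea behind `RandomAttack` can be sketched in plain Python. This is only an illustrative sketch of the concept, not GraphWar's implementation: the function `flip_random_edges` and its budget handling (an `int` meaning an absolute number of flips, a `float` meaning a fraction of existing edges, mirroring the two calls above) are hypothetical.

```python
import random

def flip_random_edges(edges, num_nodes, num_budgets, seed=42):
    """Randomly flip edges: remove an edge if it exists, add it otherwise.

    num_budgets: int  -> absolute number of edge flips,
                 float -> fraction of the existing edges to flip.
    """
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    if isinstance(num_budgets, float):
        budget = int(num_budgets * len(edge_set))
    else:
        budget = num_budgets

    flips = []
    while len(flips) < budget:
        u, v = rng.sample(range(num_nodes), 2)
        e = tuple(sorted((u, v)))
        if e in flips:  # don't flip the same pair twice
            continue
        flips.append(e)

    # Apply the flips to the edge set.
    for e in flips:
        if e in edge_set:
            edge_set.remove(e)
        else:
            edge_set.add(e)
    return sorted(edge_set), flips
```

The real attacks additionally respect constraints (e.g., keeping the graph unweighted and undirected) and, for the non-random attacks, pick flips by gradient or surrogate-model scores rather than uniformly.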

👀 Implementations

In detail, the following methods are currently implemented:

⚔ Attack

Graph Manipulation Attack (GMA)

Targeted Attack

| Methods | Descriptions | Examples |
| --- | --- | --- |
| RandomAttack | A simple random method that chooses edges to flip randomly. | [Example] |
| DICEAttack | Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 | [Example] |
| Nettack | Zügner et al. Adversarial Attacks on Neural Networks for Graph Data, KDD'18 | [Example] |
| FGAttack | Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 | [Example] |
| GFAttack | Chang et al. A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20 | [Example] |
| IGAttack | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
| SGAttack | Li et al. Adversarial Attack on Large Scale Graph, TKDE'21 | [Example] |

Untargeted Attack

| Methods | Descriptions | Examples |
| --- | --- | --- |
| RandomAttack | A simple random method that chooses edges to flip randomly. | [Example] |
| DICEAttack | Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behavior'16 | [Example] |
| FGAttack | Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 | [Example] |
| Metattack | Zügner et al. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19 | [Example] |
| IGAttack | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
| PGD | Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 | [Example] |
| MinmaxAttack | Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 | [Example] |

Graph Injection Attack (GIA)

| Methods | Descriptions | Examples |
| --- | --- | --- |
| RandomInjection | A simple random method that chooses nodes to inject randomly. | [Example] |
| AdvInjection | The 2nd place solution of KDD Cup 2020, team: ADVERSARIES. | [Example] |
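Conceptually, an injection attack adds new fake nodes and wires each to a few existing victims instead of editing existing edges. A minimal stdlib sketch of random injection (illustrative only; `random_injection` and its parameters are hypothetical, not the GraphWar API):

```python
import random

def random_injection(num_nodes, edges, num_injected, degree, seed=0):
    """Inject `num_injected` new nodes, each connected to `degree`
    randomly chosen existing nodes."""
    rng = random.Random(seed)
    edges = list(edges)
    injected = []
    for i in range(num_injected):
        new_node = num_nodes + i          # new node ids follow the existing ones
        targets = rng.sample(range(num_nodes), degree)
        injected.append(new_node)
        edges.extend((new_node, t) for t in targets)
    return injected, edges
```

Real injection attacks also need to craft the injected nodes' features (e.g., by gradient optimization, as in AdvInjection), which this sketch omits.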

Graph Universal Attack (GUA)

Graph Backdoor Attack (GBA)

| Methods | Descriptions | Examples |
| --- | --- | --- |
| LGCBackdoor | Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 | [Example] |
| FGBackdoor | Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 | [Example] |

🛡 Defense

Standard GNNs (without defense)

| Methods | Descriptions | Examples |
| --- | --- | --- |
| GCN | Kipf et al. Semi-Supervised Classification with Graph Convolutional Networks, ICLR'17 | [Example] |
| SGC | Wu et al. Simplifying Graph Convolutional Networks, ICLR'19 | [Example] |
| GAT | Veličković et al. Graph Attention Networks, ICLR'18 | [Example] |
| DAGNN | Liu et al. Towards Deeper Graph Neural Networks, KDD'20 | [Example] |
| APPNP | Klicpera et al. Predict then Propagate: Graph Neural Networks meet Personalized PageRank, ICLR'19 | [Example] |
| JKNet | Xu et al. Representation Learning on Graphs with Jumping Knowledge Networks, ICML'18 | [Example] |
| TAGCN | Du et al. Topological Adaptive Graph Convolutional Networks, arXiv'17 | [Example] |
| SSGC | Zhu et al. Simple Spectral Graph Convolution, ICLR'21 | [Example] |

Robust GNNs

| Methods | Descriptions | Examples |
| --- | --- | --- |
| MedianGCN | Chen et al. Understanding Structural Vulnerability in Graph Convolutional Networks, IJCAI'21 | [Example] |
| RobustGCN | Zhu et al. Robust Graph Convolutional Networks Against Adversarial Attacks, KDD'19 | [Example] |
| SoftMedianGCN | Geisler et al. Reliable Graph Neural Networks via Robust Aggregation, NeurIPS'20<br>Geisler et al. Robustness of Graph Neural Networks at Scale, NeurIPS'21 | [Example] |
| ElasticGNN | Liu et al. Elastic Graph Neural Networks, ICML'21 | [Example] |
| AirGNN | Liu et al. Graph Neural Networks with Adaptive Residual, NeurIPS'21 | [Example] |
| SimPGCN | Jin et al. Node Similarity Preserving Graph Convolutional Networks, WSDM'21 | [Example] |
| SAT | Li et al. Spectral Adversarial Training for Robust Graph Neural Network, arXiv'22 | [Example] |

Defense Strategy

| Methods | Descriptions | Examples |
| --- | --- | --- |
| JaccardPurification | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
| SVDPurification | Entezari et al. All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs, WSDM'20 | [Example] |
| GNNGUARD | Zhang et al. GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks, NeurIPS'20 | [Example] |
| GUARD | Li et al. GUARD: Graph Universal Adversarial Defense, arXiv'22 | [Example] |
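As a flavor of how a purification defense works: JaccardPurification (Wu et al., IJCAI'19) prunes edges whose endpoints have low Jaccard similarity between their binary feature sets, on the intuition that attack edges tend to connect dissimilar nodes. A self-contained sketch of that idea (the function names and the `threshold` parameter here are illustrative, not GraphWar's API):

```python
def jaccard_similarity(a, b):
    """Jaccard similarity between two sets of active (nonzero) feature indices."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def jaccard_purify(edges, features, threshold=0.01):
    """Keep only edges whose endpoint features are sufficiently similar.

    features: dict mapping node id -> iterable of active feature indices.
    """
    return [
        (u, v) for u, v in edges
        if jaccard_similarity(features[u], features[v]) >= threshold
    ]
```

The purified edge list is then fed to an ordinary GNN; the defense is entirely a preprocessing step.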

More details on the literature and official implementations can be found at Awesome Graph Adversarial Learning.

Others

| Methods | Descriptions | Examples |
| --- | --- | --- |
| DropEdge | Rong et al. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification, ICLR'20 | [Example] |
| DropNode | You et al. Graph Contrastive Learning with Augmentations, NeurIPS'20 | [Example] |
| DropPath | Li et al. MaskGAE: Masked Graph Modeling Meets Graph Autoencoders, arXiv'22 | [Example] |
| Centered Kernel Alignment (CKA) | Nguyen et al. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth, ICLR'21 | [Example] |
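The Drop* family above is conceptually simple: at each training epoch, a random portion of the graph is removed as a stochastic augmentation. An illustrative stdlib sketch of DropEdge-style edge dropping (not the GraphWar API; `drop_edge` and its signature are assumptions):

```python
import random

def drop_edge(edges, p=0.5, seed=None):
    """Return a copy of `edges` with each edge independently kept
    with probability 1 - p (DropEdge-style training augmentation)."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]
```

DropNode and DropPath follow the same pattern but sample nodes or paths instead of individual edges.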

❓ Known Issues
