Gradient Inversion Attack on Graph Neural Networks

Authors: Divya Anand Sinha, Yezi Liu, Ruijie Du, Athina Markopoulou, Yanning Shen

TMLR 2025

Reproducibility assessment (each item lists the variable, the result, and the LLM's response):
Research Type — Experimental: Theoretical analysis and empirical validation demonstrate that, by leveraging the unique properties of graph data and GNNs, GLG achieves more accurate reconstruction of both nodal features and graph structure from gradients.
Researcher Affiliation — Academia: Divya Anand Sinha (EMAIL), University of California, Irvine; Ruijie Du (EMAIL), University of California, Irvine; Yezi Liu (EMAIL), University of California, Irvine; Athina Markopoulou (EMAIL), University of California, Irvine; Yanning Shen (EMAIL), University of California, Irvine.
Pseudocode — Yes: Appendix A (Attack Algorithms) provides Algorithm 1, GLG (Node Attacker); Algorithm 2, GLG (Node Attacker 2); and Algorithm 3, Graph Attacker.
Open Source Code — No: The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets — Yes: GitHub (Rozemberczki et al., 2021): nodes represent developers on GitHub and edges are mutual follower relationships. Facebook Page-Page (FB) (Rozemberczki et al., 2019): a page-page graph of verified Facebook sites. OGBN-Arxiv (Hu et al., 2020): a large-scale citation network from the Open Graph Benchmark (OGB). From the TUDataset collection (Morris et al., 2020): MUTAG (Debnath et al., 1991), 188 chemical compounds from two classes; COIL-RAG (Riesen & Bunke, 2008), a computer-vision dataset where images of objects are represented as region adjacency graphs; and FRANKENSTEIN (Kazius et al., 2005), graphs representing chemical molecules.
Dataset Splits — No: The paper describes how samples are chosen for the attack process (e.g., '20 randomly selected nodes from each dataset', 'randomly sample a node and its 3-hop neighborhood as the subgraph'), but it does not specify the training, validation, and test splits for the GNN models being attacked, which would be needed to reproduce the GNN training itself.
Hardware Specification — No: The paper does not provide specifics about the hardware used for the experiments (e.g., exact GPU models, CPU specifications, or memory amounts).
Software Dependencies — No: The paper mentions using Adam as the optimizer but does not list software dependencies with version numbers (e.g., Python or PyTorch/TensorFlow versions).
Experiment Setup — Yes: In the experiments, a 2-layer GNN model with hidden dimension 100 and a sigmoid activation function is employed. The weights of the model are randomly initialized. For all experiments, the dummy nodal features are initialized randomly, with each entry sampled from the standard normal distribution N(0, 1), and the dummy adjacency matrix is initialized by randomly setting its entries to 0 or 1. The hyperparameter values for the feature-smoothness and sparsity regularizers are set to α = 10^9 and β = 10^7. In all experiments, Adam is used as the optimizer. All reported results are averages over 20 independent runs of each attack.
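The setup above can be sketched as a generic gradient-inversion loop in PyTorch: a 2-layer sigmoid GNN with random weights, dummy features drawn from N(0, 1), a random 0/1 dummy adjacency, and Adam optimizing a gradient-matching objective with smoothness and sparsity regularizers. This is a minimal illustration, not the paper's GLG algorithm; the learning rate, iteration count, toy graph sizes, and the small regularizer weights used here are assumptions.

```python
import torch

torch.manual_seed(0)
N, F, H, C = 6, 8, 100, 2  # nodes, feature dim, hidden dim 100, classes

class GNN(torch.nn.Module):
    """2-layer GNN with sigmoid activation and randomly initialized weights."""
    def __init__(self):
        super().__init__()
        self.lin1 = torch.nn.Linear(F, H, bias=False)
        self.lin2 = torch.nn.Linear(H, C, bias=False)

    def forward(self, x, a):
        h = torch.sigmoid(a @ self.lin1(x))
        return a @ self.lin2(h)

model = GNN()
x_true = torch.randn(N, F)                 # private nodal features
a_true = (torch.rand(N, N) < 0.3).float()  # private graph structure
y_true = torch.randint(0, C, (N,))

# Gradients the attacker observes from one training step on the private graph.
loss = torch.nn.functional.cross_entropy(model(x_true, a_true), y_true)
target_grads = torch.autograd.grad(loss, list(model.parameters()))

# Dummy inputs: features ~ N(0, 1); adjacency entries randomly 0 or 1
# (kept continuous during optimization so gradients can flow).
x_dum = torch.randn(N, F, requires_grad=True)
a_dum = torch.randint(0, 2, (N, N)).float().requires_grad_(True)

alpha, beta = 1e-9, 1e-7  # assumed toy-scale regularizer weights
opt = torch.optim.Adam([x_dum, a_dum], lr=0.1)

history = []
for _ in range(200):
    opt.zero_grad()
    dum_loss = torch.nn.functional.cross_entropy(model(x_dum, a_dum), y_true)
    dum_grads = torch.autograd.grad(dum_loss, list(model.parameters()),
                                    create_graph=True)
    # Gradient-matching term: squared distance to the observed gradients.
    match = sum(((g - t) ** 2).sum() for g, t in zip(dum_grads, target_grads))
    # Feature-smoothness regularizer: sum_ij a_ij * ||x_i - x_j||^2.
    diff = (x_dum.unsqueeze(0) - x_dum.unsqueeze(1)).pow(2).sum(-1)
    smooth = (a_dum * diff).sum()
    # Sparsity regularizer on the dummy adjacency.
    sparse = a_dum.abs().sum()
    obj = match + alpha * smooth + beta * sparse
    obj.backward()
    opt.step()
    history.append(match.item())
```

After the loop, `x_dum` is the reconstructed feature estimate and thresholding `a_dum` gives a structure estimate; the gradient-matching loss in `history` should shrink as the dummy inputs align with the observed gradients.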