Adaptive Graph Unlearning

Authors: Pengfei Ding, Yan Wang, Guanfeng Liu, Jiajie Zhu

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on seven real-world graphs demonstrate that AGU outperforms existing methods in terms of effectiveness, efficiency, and unlearning capability. We conduct extensive experiments to answer the following research questions: RQ1: How does AGU perform compared to state-of-the-art methods? RQ2: How does each proposed module in AGU contribute to the overall performance? RQ3: Can our proposed neighbor selection strategies enhance the performance of existing methods? RQ4: How do different parameter settings influence AGU's performance?
Researcher Affiliation | Academia | Pengfei Ding, Yan Wang, Guanfeng Liu, Jiajie Zhu. Macquarie University, Sydney, Australia.
Pseudocode | No | The paper describes its methodology using descriptive text, mathematical formulas, and a framework overview figure. There are no explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | [Ding et al., 2025] Pengfei Ding, Yan Wang, Guanfeng Liu, and Jiajie Zhu. AGU Appendix. https://github.com/Aliezzz/AGU, 2025.
Open Datasets | Yes | Datasets. We select seven widely used datasets: Cora, Citeseer, PubMed [Yang et al., 2016], Amazon-Photo, Amazon-Computers, Coauthor-CS [Shchur et al., 2018], and Flickr [Zeng et al., 2019].
Dataset Splits | Yes | The datasets are split following the guidelines of recent GU studies [Cheng et al., 2023; Li et al., 2024b], with 80% of the nodes used for training and 20% for testing.
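The paper states only the 80/20 node-level ratio, not the splitting procedure. A minimal sketch of one plausible implementation, assuming a uniform random permutation of node indices (the function name and seed are illustrative, not from the paper):

```python
import numpy as np

def split_nodes(num_nodes, train_ratio=0.8, seed=0):
    """Randomly assign nodes to boolean train/test masks (80/20 by default)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_ratio * num_nodes)
    train_mask = np.zeros(num_nodes, dtype=bool)
    test_mask = np.zeros(num_nodes, dtype=bool)
    train_mask[perm[:n_train]] = True
    test_mask[perm[n_train:]] = True
    return train_mask, test_mask

# e.g., Cora has 2,708 nodes
train_mask, test_mask = split_nodes(2708)
```

Boolean masks (rather than index lists) match the convention of common GNN frameworks, where loss and accuracy are computed by indexing node embeddings with a mask.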
Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments. It mentions using GNNs as backbones, but no specific GPU or CPU models are provided.
Software Dependencies | No | The paper mentions various GNN models (GCN, SGC, GAT, SAGE, GIN) but does not provide specific version numbers for any software, libraries, or programming languages used.
Experiment Setup | Yes | For all methods, we set the embedding dimension to 64, and fix the number of GNN layers at 2. Baseline parameters are initialized using the values reported in the original papers and further fine-tuned for optimal performance. In AGU, the edge unlearning loss LEU uses concatenation for φ(·) and mean-squared error for dis(·), while Eq. (9) uses cosine similarity for dis(·). The loss coefficient parameter α is set to 0.1. ... minimal training epochs (20-30). ... the optimal range for θ is between 5e-5 and 5e-4. For kans, a value around 40% can achieve a good trade-off between effectiveness and efficiency.
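The quoted setup names only the functional choices: concatenation for φ(·), mean-squared error for dis(·) in the edge-unlearning loss LEU, and cosine similarity for dis(·) in Eq. (9). A minimal sketch of those three pieces; the `total_loss` combination via α = 0.1 is our assumption about how the coefficient is applied, not something the excerpt specifies:

```python
import numpy as np

def phi(h_u, h_v):
    """φ(·): concatenate the two endpoint embeddings of an edge."""
    return np.concatenate([h_u, h_v])

def mse(a, b):
    """dis(·) inside the edge-unlearning loss LEU: mean-squared error."""
    return float(np.mean((a - b) ** 2))

def cos_sim(a, b):
    """dis(·) in Eq. (9): cosine similarity."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def total_loss(task_loss, l_eu, alpha=0.1):
    """Hypothetical weighted sum using the reported coefficient α = 0.1;
    the paper excerpt does not spell out this exact combination."""
    return task_loss + alpha * l_eu
```

In a full implementation these would operate on GNN-produced embedding tensors (dimension 64, per the setup) inside a 20-30 epoch training loop.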