Community-Centric Graph Unlearning
Authors: Yi Li, Shichao Zhang, Guixian Zhang, Debo Cheng
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on five real-world datasets and three widely used GNN backbones have verified the high performance and efficiency of our CGE method, highlighting its potential in the field of graph unlearning. To evaluate the efficacy of CGE, we conducted a series of comprehensive experiments using four real-world datasets and three prevalent GNN backbones. The evaluation addresses the following questions: (1) Can CGE provide excellent and comparable model utility? (2) How efficient is CGE in practical applications? (3) Can CGE achieve deterministic graph unlearning? Additionally, a series of ablation studies were performed to examine CGE's superiority at each stage of the graph unlearning process. |
| Researcher Affiliation | Academia | 1Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, 541004, China 2Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, 541004, China 3School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, 221116, China 4UniSA STEM, University of South Australia, Mawson Lakes, Adelaide, Australia |
| Pseudocode | No | The paper describes methods and processes using mathematical formulas and descriptive text, but it does not include any explicitly labeled pseudocode or algorithm blocks with structured steps. |
| Open Source Code | Yes | Code https://github.com/liiiyi/CCGU |
| Open Datasets | Yes | Datasets. We evaluated the CGE on four real-world datasets of various sizes, including Cora (Yang, Cohen, and Salakhudinov 2016), Citeseer (Yang, Cohen, and Salakhudinov 2016), CS (Shchur et al. 2018) and Reddit², all of which are commonly used in GNN evaluations. The large-scale Reddit dataset was specifically included to assess the unlearning framework's performance in a real-world context. The statistics for these datasets are summarized in Table 2. ... ²https://docs.dgl.ai/generated/dgl.data.RedditDataset.html |
| Dataset Splits | No | The paper states: "For each unlearning batch, 0.5% of the original dataset's nodes were randomly selected." This refers to the unlearning request size rather than the standard training, validation, and test splits for model development. |
| Hardware Specification | Yes | All experiments were conducted on an NVIDIA Tesla A800 GPU server running the Ubuntu 23.04 LTS operating system. |
| Software Dependencies | Yes | CGE is implemented using Python 3.8.19 and DGL. |
| Experiment Setup | No | The paper states: "More detailed experimental settings are provided in Appendix B." However, specific hyperparameters such as learning rate, batch size, epochs, or optimizer settings are not detailed in the main text. |
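The unlearning-request protocol quoted above (randomly selecting 0.5% of the dataset's nodes per batch) can be sketched in a few lines. This is a minimal illustration, not the paper's code: the function name `sample_unlearning_batch` and the fixed seed are assumptions; only the 0.5% fraction and the Cora node count (2,708) come from standard facts about the setup.

```python
import random

def sample_unlearning_batch(num_nodes, fraction=0.005, seed=0):
    """Randomly pick `fraction` of node IDs as one unlearning request.

    Hypothetical helper mirroring the paper's stated protocol of
    sampling 0.5% of nodes per unlearning batch.
    """
    rng = random.Random(seed)
    k = max(1, int(num_nodes * fraction))  # at least one node per batch
    return sorted(rng.sample(range(num_nodes), k))

# Cora has 2,708 nodes, so 0.5% yields a 13-node unlearning batch.
batch = sample_unlearning_batch(2708)
print(len(batch))  # 13
```

Seeding the sampler makes the unlearning batches reproducible across runs, which matters when comparing retrain-from-scratch baselines against the unlearned model on identical deletion requests.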