COMRECGC: Global Graph Counterfactual Explainer through Common Recourse

Authors: Gregoire Fournier, Sourav Medya

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. We benchmark our algorithm against strong baselines on four real-world graph datasets and demonstrate the superior performance of COMRECGC against the competitors. We also compare the common recourse explanations to the graph counterfactual explanations, showing that common recourse explanations are either comparable or superior, making them worth considering for applications such as drug discovery or computational biology.
Researcher Affiliation: Academia. 1Department of Mathematics, Statistics and Computer Science, University of Illinois Chicago, Chicago, USA; 2Department of Computer Science, University of Illinois Chicago, Chicago, USA. Correspondence to: Gregoire Fournier <EMAIL>.
Pseudocode: Yes. Algorithm 1 CR CLUSTERING(G, S, R); Algorithm 2 COMRECGC(ϕ, G, k, M, τ, R, n); Algorithm 3 COMRECGC for FC (ϕ, G, k, M, R); Algorithm 5 MULTI-HEAD VRRW(ϕ, G, k, M, τ).
Open Source Code: Yes. Reproducibility. We make our code available at https://github.com/ssggreg/COMRECGC.
Open Datasets: Yes. We consider the datasets MUTAGENICITY (Riesen & Bunke, 2008; Kazius et al., 2005), NCI1 (Wale & Karypis, 2006), AIDS (Riesen & Bunke, 2008), and PROTEINS (Borgwardt et al., 2005; Dobson & Doig, 2003).
Dataset Splits: Yes. The training/validation/testing split is 80%/10%/10%.
Hardware Specification: No. The paper mentions the 'National Artificial Intelligence Research Resource (NAIRR) Pilot and the Texas Advanced Computing Center (TACC) Vista' in the Acknowledgments, but does not provide specific details on the CPU, GPU, or memory used for the experiments.
Software Dependencies: No. The paper mentions using a GCN model and the Adam optimizer, along with citations, but does not specify version numbers for any software libraries, frameworks, or programming languages used.
Experiment Setup: Yes. We train a base GNN model (GCN) (Kipf & Welling, 2017) for a binary classification task, consisting of three convolutional layers, a max pooling layer, and a fully connected layer... The model is trained with the Adam optimizer (Kingma & Ba, 2014) and a learning rate of 0.001 for 1000 epochs. Across all of our experiments, COMRECGC uses k = 5 heads, has probability of teleportation τ = 0.05, performs the random walk for M = 50000 steps, and selects R = 100 common recourse.
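The base classifier described above (three graph-convolution layers, max pooling over nodes, and a fully connected layer producing two class logits) can be sketched as a forward pass in plain NumPy. This is an illustrative assumption of the architecture's shape, not the authors' implementation: the paper trains a GCN (Kipf & Welling, 2017) with Adam in a deep learning framework, and the toy 6-node ring graph, dimensions, and random weights below are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_layer(A_hat, H, W):
    # One graph-convolution step: propagate features along the
    # normalized adjacency, then apply a linear transform and ReLU.
    return relu(A_hat @ H @ W)

def gcn_forward(A, X, conv_weights, W_out):
    # Symmetrically normalized adjacency with self-loops,
    # as in Kipf & Welling (2017): A_hat = D^{-1/2} (A + I) D^{-1/2}.
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

    H = X
    for W in conv_weights:        # three convolutional layers
        H = gcn_layer(A_hat, H, W)
    g = H.max(axis=0)             # max pooling over nodes -> graph embedding
    return g @ W_out              # fully connected layer -> 2 class logits

# Hypothetical toy input: a 6-node ring graph with 4 features per node.
rng = np.random.default_rng(0)
n_nodes, n_feat, hidden = 6, 4, 8
A = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]:
    A[i, j] = A[j, i] = 1.0
X = rng.normal(size=(n_nodes, n_feat))
conv_weights = [rng.normal(size=(n_feat, hidden)),
                rng.normal(size=(hidden, hidden)),
                rng.normal(size=(hidden, hidden))]
W_out = rng.normal(size=(hidden, 2))

logits = gcn_forward(A, X, conv_weights, W_out)
print(logits.shape)  # one logit per class for the whole graph
```

In the actual experiments these weights would be learned by minimizing a binary classification loss with Adam at learning rate 0.001 for 1000 epochs, as quoted above.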