Higher Order Structures for Graph Explanations

Authors: Akshit Sinha, Sreeram Vennam, Charu Sharma, Ponnurangam Kumaraguru

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluation shows that, on average across real-world datasets from the Graph XAI benchmark and synthetic datasets, and across various graph explainers, FORGE improves average explanation accuracy by 1.9x and 2.25x, respectively. We perform ablation studies to confirm the importance of higher-order relations in improving explanations, while our scalability analysis demonstrates FORGE's efficacy on large graphs.
Researcher Affiliation | Academia | International Institute of Information Technology, Hyderabad
Pseudocode | Yes | Algorithm 1: Lifting algorithm. Input: G(V, E). Output: X(C, Σ).
Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code or a link to a repository for the methodology described.
Open Datasets | Yes | We take real-world datasets from the graph explainability benchmark, Graph XAI (Agarwal et al. 2023), which includes Benzene, Mutagenicity, Alkyl Carbonyl, and Fluoride Carbonyl.
Dataset Splits | No | The paper states: "The reported results are averaged over 10 different seeds" and "Specific details about dataset generation can be found in the Appendix." However, it does not provide specific details on dataset splits (e.g., percentages for training, validation, and test sets) for either the real-world or synthetic datasets within the main text.
Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments (e.g., GPU models, CPU types, or memory).
Software Dependencies | No | The paper does not specify any particular software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x).
Experiment Setup | No | The paper mentions: "The reported results are averaged over 10 different seeds" and "For reproducibility of our results, all implementation details are contained in the Appendix." However, it does not provide specific hyperparameters (e.g., learning rate, batch size, number of epochs) or system-level training settings in the main text.
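The pseudocode row above summarizes Algorithm 1, which lifts a graph G(V, E) to a higher-order structure X(C, Σ). One common way to realize such a lifting is a clique complex, where every (k+1)-clique of the graph becomes a k-dimensional cell; the sketch below illustrates that idea. This is a hypothetical illustration under that assumption, not the paper's actual Algorithm 1, and the function name `lift_to_complex` and its `max_dim` parameter are invented for the example.

```python
def lift_to_complex(vertices, edges, max_dim=2):
    """Hypothetical clique-complex lifting of G(V, E).

    Returns a dict mapping each dimension k to the list of k-cells:
    0-cells are vertices, 1-cells are edges, and each (k+1)-clique
    becomes a k-cell. The paper's Algorithm 1 may differ.
    """
    edge_set = {frozenset(e) for e in edges}
    cells = {
        0: [frozenset([v]) for v in vertices],
        1: sorted(edge_set, key=sorted),
    }
    # Grow cliques one dimension at a time: a vertex extends a clique
    # only if it is adjacent to every member of that clique.
    prev = edge_set
    for dim in range(2, max_dim + 1):
        nxt = set()
        for clique in prev:
            for v in vertices:
                if v not in clique and all(
                    frozenset([v, u]) in edge_set for u in clique
                ):
                    nxt.add(clique | {v})
        cells[dim] = sorted(nxt, key=sorted)
        prev = nxt
    return cells

# Example: a triangle (1, 2, 3) with a pendant edge (3, 4).
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (1, 3), (3, 4)]
X = lift_to_complex(V, E)
print(X[2])  # one 2-cell: the triangle {1, 2, 3}
```

Note that only the fully connected triple {1, 2, 3} is promoted to a 2-cell; the pendant edge (3, 4) stays one-dimensional, which is what lets a downstream explainer reason about higher-order motifs explicitly rather than edge by edge.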