On Measuring Long-Range Interactions in Graph Neural Networks

Authors: Jacob Bamberger, Benjamin Gutteridge, Scott Le Roux, Michael M. Bronstein, Xiaowen Dong

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we illustrate the proposed range measures, computing them for different tasks and topologies for node- and graph-level tasks, both empirically and analytically. We then leverage this observation to analyze the LRGB benchmark by the range of models trained on its tasks. We present empirical results in this direction on both synthetic and real-world experiments. Furthermore, we include additional experiments on CORA, a known short-range task, and on heterophilic tasks (Platonov et al., 2023) in Appendix C.
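The excerpt above refers to "range measures" that quantify how far apart interacting nodes are in a trained model. As a rough, self-contained illustration of the idea (not the authors' exact definition), one can weight each node's shortest-path distance from a target node by a normalized sensitivity score, a stand-in for the gradient magnitude |∂y_v/∂x_u|, and take the weighted mean. The graph, sensitivity values, and function names below are all hypothetical:

```python
from collections import deque

def shortest_path_dists(adj, src):
    """BFS shortest-path distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def range_measure(adj, sensitivities, node):
    """Sensitivity-weighted mean graph distance to `node`.

    `sensitivities[u]` is a stand-in for how strongly node u's input
    influences the prediction at `node`; weights are normalized so the
    measure is a convex combination of distances. This is a hypothetical
    proxy for illustration, not the paper's definition.
    """
    dist = shortest_path_dists(adj, node)
    total = sum(sensitivities[u] for u in dist)
    return sum(sensitivities[u] * dist[u] for u in dist) / total

# Toy path graph 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

# A "short-range" model: sensitivity decays quickly with distance.
short = {0: 1.0, 1: 0.5, 2: 0.25, 3: 0.125, 4: 0.0625}
# A "long-range" model: uniform sensitivity over all nodes.
long_ = {u: 1.0 for u in adj}

print(range_measure(adj, short, 0))  # below 1: influence mass stays near node 0
print(range_measure(adj, long_, 0))  # 2.0: plain mean distance on the path
```

A short-range model concentrates its sensitivity near the target node and scores low; a model whose predictions depend uniformly on distant nodes scores near the mean graph distance, which is the intuition behind comparing model range across LRGB tasks.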
Researcher Affiliation | Collaboration | 1University of Oxford, 2AITHYRA. Correspondence to: Jacob Bamberger <EMAIL>, Benjamin Gutteridge <EMAIL>, Scott Le Roux <EMAIL>.
Pseudocode | No | The paper describes its methods mathematically and through textual explanations, but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | All code for reproducing experiments is available at https://github.com/BenGutteridge/range-measure
Open Datasets | Yes | While some recent works have introduced and motivated long-range tasks more systematically (Liang et al., 2025), the Long Range Graph Benchmark (LRGB) (Dwivedi et al., 2022) remains ubiquitous. We use CORA and train a graph Transformer and a GCN... We use our range measure to evaluate models trained on the heterophilic datasets AMAZON-RATINGS and ROMAN-EMPIRE (Platonov et al., 2023).
Dataset Splits | No | Figures 6 & 7 show the evolution of model range over training for a subset of the validation split: 500 graphs for VOCSUPERPIXELS and 200 each for the PEPTIDES datasets. We report only validation results, as range estimates were found to be highly consistent across splits.
Hardware Specification | Yes | All experiments were feasible on, and primarily performed on, NVIDIA A10s. Some experiments were performed on NVIDIA H100s.
Software Dependencies | No | The paper mentions using the 'torch.nn.Embedding' module and 'GraphGym' for encodings and various GNN models, but does not provide specific version numbers for these or any other key software dependencies.
Experiment Setup | Yes | Table 4. Hyperparameters for LRGB experiments in Section 6.2.2. The #Param. row in (a) and (b) lists the original parameter count from Tönshoff et al. (2023) in parentheses alongside our decreased parameter count due to efficient one-hot encodings (see Appendix D). The table includes details such as learning rate, dropout, number of layers, hidden dimension, and batch size for various models.
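The hyperparameter table reported in the paper covers learning rate, dropout, number of layers, hidden dimension, and batch size per model. For readers re-implementing such a setup, a minimal sketch of how one such row might be captured programmatically follows; the field names mirror those listed above, but every value and identifier below is an illustrative placeholder, not taken from Table 4:

```python
# Hypothetical hyperparameter record for one model/task pair.
# Field names follow the quantities listed for Table 4; all values
# here are placeholders, not the paper's actual settings.
example_config = {
    "model": "GCN",
    "dataset": "Peptides-func",
    "learning_rate": 1e-3,
    "dropout": 0.1,
    "num_layers": 5,
    "hidden_dim": 256,
    "batch_size": 32,
}

def count_tunable(cfg):
    """Count tunable hyperparameters, excluding identifier fields."""
    return len([k for k in cfg if k not in ("model", "dataset")])

print(count_tunable(example_config))  # 5 tunable settings, as in Table 4
```

Keeping such records as plain dictionaries (or YAML, as GraphGym-style pipelines typically do) makes it easy to log the exact configuration alongside results.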