Learning Long Range Dependencies on Graphs via Random Walks

Authors: Dexiong Chen, Till Schulz, Karsten Borgwardt

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental evaluations demonstrate that our approach achieves competitive performance on 19 graph and node benchmark datasets, notably outperforming existing methods by up to 13% on the PascalVOC-SP and COCO-SP datasets.
Researcher Affiliation | Academia | Dexiong Chen, Till Hendrik Schulz & Karsten Borgwardt, Max Planck Institute of Biochemistry, 82152 Martinsried, Germany. EMAIL
Pseudocode | No | The paper describes the architecture of NeuralWalker and its components (random walk sampler, walk embedder, sequence layer, walk aggregator, message passing) in Sections 3.2 and 3.3, with a visual overview in Figure 2. However, no explicit pseudocode blocks or algorithms are presented in a structured, code-like format.
Open Source Code | Yes | Our code is available at https://github.com/BorgwardtLab/NeuralWalker.
Open Datasets | Yes | To ensure diverse benchmarking tasks, we use datasets from Benchmarking-GNNs (Dwivedi et al., 2023), Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022), Open Graph Benchmark (OGB) (Hu et al., 2020a), and datasets from Platonov et al. (2022); Leskovec & Krevl (2014).
Dataset Splits | Yes | For each dataset, we follow their respective training protocols and use the standard train/validation/test splits and evaluation metrics.
Hardware Specification | Yes | Experiments were conducted on a shared computing cluster with various CPU and GPU configurations, including a mix of NVIDIA A100 (40GB) and H100 (80GB) GPUs. The run-time of each model was measured on a single NVIDIA A100 GPU. ... For the POKEC dataset, due to its large graph size, inference times were computed entirely on CPUs. ... measured on a single H100 GPU equipped with 8 AMD EPYC 9554 CPUs.
Software Dependencies | No | The paper states: "We implemented our models using PyTorch Geometric (Fey & Lenssen, 2019) (MIT License)" and mentions the torch.profiler library. However, specific version numbers for these software components are not provided, which is required for a reproducible description.
Experiment Setup | Yes | Given the large number of hyperparameters and datasets, we did not perform an exhaustive search beyond the ablation studies in Section 5.3. For each dataset, we then adjusted the number of layers, the hidden dimension, the learning rate, and the weight decay based on hyperparameters reported in the related literature (Rampášek et al., 2022; Tönshoff et al., 2023b; Deng et al., 2024; Tönshoff et al., 2023a). ... The detailed hyperparameters used in NeuralWalker as well as the model sizes and runtime on different datasets are provided in Tables 9, 10, 11, and 12.
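The NeuralWalker pipeline described in the Pseudocode row above begins with a random walk sampler feeding walks into the downstream embedder and sequence layers. The following is a minimal sketch of uniform random-walk sampling on an adjacency-list graph; it makes no claim about the paper's actual sampler, which may differ (e.g., non-backtracking walks, GPU-side batching):

```python
import random

def sample_walks(adj, walk_length, n_walks, seed=0):
    """Sample uniform random walks starting from every node.

    adj: dict mapping node -> list of neighbour nodes.
    walk_length: number of steps per walk (walk has walk_length + 1 nodes).
    n_walks: number of walks started from each node.
    Returns a list of walks, each a list of node ids.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(n_walks):
        for start in adj:
            walk = [start]
            for _ in range(walk_length):
                neighbours = adj[walk[-1]]
                if not neighbours:  # dead end: stop this walk early
                    break
                walk.append(rng.choice(neighbours))
            walks.append(walk)
    return walks

# Toy 4-cycle graph: 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
walks = sample_walks(adj, walk_length=5, n_walks=2)
```

Each sampled walk is a node sequence that a walk embedder could then encode; resampling walks every epoch (rather than fixing them) is one common way such samplers trade variance for coverage.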
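The Software Dependencies row flags missing version numbers as the reproducibility gap. A minimal sketch, assuming nothing about the authors' actual environment, of how exact installed versions could be recorded alongside experiment logs using only the standard library (the package names queried here are illustrative):

```python
from importlib.metadata import version, PackageNotFoundError

def record_versions(packages):
    """Map each package name to its installed version string,
    marking packages that cannot be found in the environment."""
    report = {}
    for name in packages:
        try:
            report[name] = version(name)
        except PackageNotFoundError:
            report[name] = "not installed"
    return report

# Illustrative stack for a PyTorch Geometric project
print(record_versions(["torch", "torch-geometric", "numpy"]))
```

Emitting such a report (or a full `pip freeze`) with each run would satisfy the version-pinning requirement the assessment asks for.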