Balancing Efficiency and Expressiveness: Subgraph GNNs with Walk-Based Centrality
Authors: Joshua Southern, Yam Eitan, Guy Bar-Shalom, Michael M. Bronstein, Haggai Maron, Fabrizio Frasca
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A comprehensive set of experimental results demonstrates that HyMN provides an effective synthesis of expressiveness, efficiency, and downstream performance, unlocking the application of Subgraph GNNs to dramatically larger graphs. |
| Researcher Affiliation | Collaboration | ¹Imperial College London, ²Technion - Israel Institute of Technology, ³University of Oxford, ⁴AITHYRA, ⁵NVIDIA Research. Correspondence to: Joshua Southern <EMAIL>, Fabrizio Frasca <EMAIL>. |
| Pseudocode | Yes | Appendix C. Hybrid Marking Networks in Algorithmic Form. Algorithm 1 Hybrid Marking Network. |
| Open Source Code | Yes | Code to reproduce experimental results is available at https://github.com/jks17/HyMN/. |
| Open Datasets | Yes | OGB. We tested HyMN on several datasets for graph property prediction from the OGB benchmark (Hu et al., 2020b). ... Peptides. ... from the LRGB benchmark (Dwivedi et al., 2022)... ZINC (Sterling & Irwin, 2015; Gómez-Bombarelli et al., 2018)... MalNet-Tiny... (Freitas et al., 2021). Reddit. ... REDDIT-BINARY (RDT-B) dataset (Morris et al., 2020a). |
| Dataset Splits | Yes | We considered the challenging scaffold splits proposed in (Hu et al., 2020a). ... We considered the predefined dataset splits and used the Mean Absolute Error (MAE) both as a loss and evaluation metric. ... We used the evaluation procedure proposed in Xu et al. (2018), consisting of a 10-fold cross-validation, reporting the metric at the epoch with the best averaged validation accuracy across the folds. |
| Hardware Specification | Yes | All experiments were run on a single NVIDIA GeForce RTX 3080 with 10GB RAM. ... Table 4: Results and timing comparisons using a GeForce RTX 2080 8 GB for the MalNet-Tiny dataset. |
| Software Dependencies | No | The paper mentions: "We implemented our method using PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019)." However, it does not provide explicit version numbers for these software components within the text. |
| Experiment Setup | Yes | We set the batch size to 128 for MOLHIV and 32 for the other benchmarks... We set the hidden dimension to be 300... We tuned the number of layers in {2, 4, 6, 8, 10}, the number of layers post message-passing in {1, 2, 3}, dropout after each layer in {0.0, 0.3, 0.5}... The maximum number of epochs is set to 100 for all models... |
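The method's title refers to walk-based centrality, which HyMN uses to select nodes for marking. As a minimal illustrative sketch (not the authors' implementation; their exact centrality measure, truncation length, and normalization may differ), walk counts up to length `k` can be computed from powers of the adjacency matrix:

```python
import numpy as np

def walk_centrality(adj, k=3):
    """Score each node by its total number of walks of length 1..k,
    i.e. the row sums of A + A^2 + ... + A^k (A symmetric, unweighted)."""
    adj = np.asarray(adj, dtype=float)
    total = np.zeros_like(adj)
    power = np.eye(adj.shape[0])
    for _ in range(k):
        power = power @ adj  # power now holds A^(step)
        total += power
    return total.sum(axis=1)

# 4-node path graph 0-1-2-3: interior nodes participate in more walks
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
scores = walk_centrality(A, k=3)  # → [6., 10., 10., 6.]
```

A marking scheme would then pick the top-scoring nodes (here, the interior nodes 1 and 2) as subgraph roots.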
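The evaluation protocol quoted under Dataset Splits (Xu et al., 2018) averages validation accuracy across the 10 folds at each epoch and reports the best-averaged epoch. A sketch of that selection step; the helper name and array layout are illustrative assumptions:

```python
import numpy as np

def select_epoch(fold_val_acc):
    """fold_val_acc: shape (n_folds, n_epochs) of validation accuracies.
    Average across folds per epoch and pick the epoch whose fold-averaged
    accuracy is highest; also return the std across folds at that epoch."""
    fold_val_acc = np.asarray(fold_val_acc, dtype=float)
    mean_per_epoch = fold_val_acc.mean(axis=0)
    best = int(mean_per_epoch.argmax())
    return best, float(mean_per_epoch[best]), float(fold_val_acc[:, best].std())

# Toy example: 3 folds x 4 epochs
acc = [[0.60, 0.70, 0.72, 0.71],
       [0.58, 0.69, 0.74, 0.70],
       [0.61, 0.71, 0.73, 0.72]]
best_epoch, best_mean, best_std = select_epoch(acc)  # epoch 2, mean 0.73
```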
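The tuned search space quoted under Experiment Setup (layers, post-message-passing layers, dropout) amounts to a small grid search; the dictionary keys below are illustrative names, the values are as reported:

```python
from itertools import product

# Hyperparameter ranges quoted in the paper's setup
grid = {
    "num_layers": [2, 4, 6, 8, 10],
    "post_mp_layers": [1, 2, 3],
    "dropout": [0.0, 0.3, 0.5],
}

# Enumerate every combination: 5 * 3 * 3 = 45 configurations
configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
```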