Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Subgraph Permutation Equivariant Networks
Authors: Joshua Mitton, Roderick Murray-Smith
TMLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating statistically indistinguishable results from the state-of-the-art on six out of seven benchmarks. Further, we demonstrate that the use of local update functions offers a significant improvement in GPU memory over global methods. |
| Researcher Affiliation | Academia | Joshua Mitton EMAIL; EMAIL School of Computing Science University of Glasgow, Glasgow, Scotland, UK. Roderick Murray-Smith EMAIL School of Computing Science University of Glasgow, Glasgow, Scotland, UK. |
| Pseudocode | Yes | We also present an algorithm for the model: (1) For i = 1 to K: extract the k-ego network subgraph H_i and place it in bag S_H^{|H_i|}. (2) For each bag S_H and each H_i in S_H: H_i ← f_0^{S_H}(ρ2→ρ2)(H_i); N_i ← f_0^{S_H}(ρ2→ρ1)(H_i). (3) For layers l = 1 to L−1: pool features across subgraphs, G ← S_H; for i = 1 to K extract the k-ego network subgraph H_i and place H_i and N_i in bag S_H^{|H_i|}; then for each bag S_H and each H_i in S_H: H_i ← f_l^{S_H}(ρ2→ρ2)(H_i); N_i ← f_l^{S_H}(ρ2→ρ1)(H_i); H_i ← f_l^{S_H}(ρ1→ρ2)(N_i); N_i ← f_l^{S_H}(ρ1→ρ1)(N_i). (4) For each bag S_H and each H_i in S_H: G_i ← f_L^{S_H}(ρ2→ρ0)(H_i); G_i ← f_L^{S_H}(ρ1→ρ0)(H_i). (5) Pool graph features across subgraph bags, G ← S_H, then update and predict the graph classification target with an MLP. Each function f is a function update with basis set given in Figure 5. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code for the methodology described. |
| Open Datasets | Yes | We tested our method on a series of 7 different real-world graph classification problems from the TUDatasets benchmark of (Yanardag & Vishwanathan, 2015). |
| Dataset Splits | Yes | Specifically, we conducted 10-fold cross validation and reported the average and standard deviation of validation accuracies across the 10 folds. |
| Hardware Specification | Yes | Figure 6 shows that the Global Permutation Equivariant Network (GPEN) (Maron et al., 2018) cannot scale beyond graphs with 500 nodes, whereas our method (SPEN) scales to graphs over an order of magnitude larger. When m = 3, GPEN can process graphs of up to 500 nodes, while SPEN can process graphs of up to 10,000 nodes using less GPU memory (the memory limit shown is that of 1 TITAN RTX). |
| Software Dependencies | No | The paper mentions the 'NetworkX package (Hagberg et al., 2008)' and the 'Adam optimizer' but does not specify version numbers or other software dependencies with versions. |
| Experiment Setup | Yes | For all experiments we used 1-hop ego networks, as this provides the most scalable version of our method. We trained the model for 50 epochs on all datasets using the Adam optimizer. ... For all datasets we use 6 automorphism equivariant layers with a base GNN utilising the ρ1 → ρ2 representation space. |
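The first step of the pseudocode above, extracting each node's k-hop ego network and bagging subgraphs by node count, can be sketched with NetworkX (which the paper itself cites). This is an illustrative reconstruction, not the authors' code, which is not released; `extract_ego_bags` and its interface are assumptions.

```python
# Sketch of the subgraph extraction step: for each node, take its
# k-hop ego network H_i and place it in a bag keyed by |H_i|.
# Function name and interface are illustrative, not from the paper.
from collections import defaultdict

import networkx as nx


def extract_ego_bags(G, k=1):
    """Extract k-hop ego-network subgraphs and bag them by size."""
    bags = defaultdict(list)  # key: node count |H_i| of the subgraph
    for node in G.nodes:
        H = nx.ego_graph(G, node, radius=k)  # k-hop ego network H_i
        bags[H.number_of_nodes()].append(H)
    return dict(bags)


graph = nx.cycle_graph(5)
bags = extract_ego_bags(graph, k=1)
# In a 5-cycle, every 1-hop ego network has 3 nodes (the node plus
# its two neighbours), so all five subgraphs land in a single bag.
```

Bagging by subgraph size is what lets the equivariant update functions f operate on batches of same-sized subgraphs.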
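The evaluation protocol quoted under Dataset Splits (10-fold cross validation, reporting mean and standard deviation of validation accuracies) can be sketched as follows. The data and the majority-class "model" here are stand-ins; the paper's model is the SPEN GNN.

```python
# Illustrative sketch of 10-fold cross validation with mean/std of
# fold accuracies reported. Synthetic features and a majority-class
# baseline stand in for graph data and the paper's model.
import numpy as np
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))     # stand-in graph-level features
y = rng.integers(0, 2, size=100)  # stand-in binary graph labels

accs = []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, val_idx in skf.split(X, y):
    # Placeholder "model": predict the training fold's majority class.
    majority = np.bincount(y[train_idx]).argmax()
    accs.append(float(np.mean(y[val_idx] == majority)))

print(f"accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
```

Reporting mean ± std over folds, rather than a single split, is what supports the paper's claim of "statistically indistinguishable" results from the state of the art.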