Graph Neural Networks Gone Hogwild
Authors: Olga Solodova, Nick Richardson, Deniz Oktay, Ryan P Adams
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that this architecture outperforms other GNNs from this class on a variety of synthetic tasks inspired by multi-agent systems. ... We perform experiments on several synthetic datasets, motivated by prediction tasks which are of interest for multi-agent systems where distributed, asynchronous inference is desirable. ... In Table 1, we report synchronous performance of each GNN architecture on the synthetic tasks. For regression tasks (counting, sums, coordinates) task performance is calculated as the root mean squared error over the test dataset normalized by the root mean value of the test dataset prediction targets. For classification tasks (chains, MNIST) task performance is calculated as the mean test dataset classification error. Table 1 reports performance for each task, with mean and standard deviation taken across 10 dataset folds and 5 random parameter seeds. |
| Researcher Affiliation | Academia | Olga Solodova, Nick Richardson, Deniz Oktay, Ryan P. Adams Department of Computer Science Princeton University Princeton, NJ, USA EMAIL |
| Pseudocode | Yes | Algorithm 1 Simulated asynchronous GNN inference |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or provide a link to a code repository. It mentions "Our algorithm is in Appendix E" but this refers to a description, not an open-source release. |
| Open Datasets | Yes | We construct a dataset where each node has a position in R^2 and neighbors within some radius are connected by an edge. ... For this experiment, we use MNIST images (those with 0/1 labels only) (Le Cun et al., 2010). ... We additionally perform experiments on benchmark datasets MUTAG (Srinivasan et al., 1996), PROTEINS (Borgwardt et al., 2005), and PPI (Hamilton et al., 2017) for node and graph classification to evaluate energy GNNs as a synchronous GNN architecture. |
| Dataset Splits | Yes | For PROTEINS and MUTAG, we perform 10-fold cross validation and report average classification accuracy and standard deviations in Table 3. For PPI, we use a 20/2/2 train/valid/test split consistent with Hamilton et al. (2017). For Peptides-func and Peptides-struct, we use a train/valid/test split consistent with Dwivedi et al. (2022) and report average precision and mean average error, respectively, in Table 5. |
| Hardware Specification | Yes | These experiments were performed on a single NVIDIA RTX 2080 Ti. |
| Software Dependencies | No | The paper mentions software like "Adam optimizer" and "L-BFGS" but does not specify their version numbers or the versions of general programming languages/libraries like Python or PyTorch. |
| Experiment Setup | Yes | We use the Adam optimizer with weight decay, where we set the optimizer parameters as α = 0.001, β1 = 0.9, β2 = 0.999. We set the learning rate to 0.002, and use exponential decay with rate 0.98 every 200 epochs. In the forward pass for IGNN, we iterate on the node update equation until convergence of node embeddings, with a convergence tolerance of 10^-5. The maximum number of iterations is set to 500. In the forward pass for the optimization-based GNNs, we use L-BFGS to minimize Eθ w.r.t. node embeddings, with a convergence tolerance of 10^-5. The maximum number of iterations is set to 50. We train for a maximum of 5000 epochs. |
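The metrics quoted under "Research Type" can be sketched as below. This is a minimal interpretation, not the paper's code: the "root mean value of the test dataset prediction targets" is read literally as the square root of the mean target value, and the paper's exact normalizer may differ.

```python
import math

def normalized_rmse(preds, targets):
    """RMSE over the test set, normalized by the 'root mean value' of the
    prediction targets -- read literally here as sqrt(mean(targets));
    the paper's exact normalizer may differ."""
    n = len(targets)
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / n)
    return rmse / math.sqrt(sum(targets) / n)

def classification_error(preds, labels):
    """Mean test-set classification error: fraction of mismatched labels."""
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)
```

For example, `normalized_rmse([2.0, 2.0], [1.0, 1.0])` gives 1.0 (RMSE of 1 over a normalizer of 1), and `classification_error([0, 1, 1], [0, 1, 0])` gives 1/3.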
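The learning-rate schedule quoted in the setup ("exponential decay with rate 0.98 every 200 epochs") can be sketched as a staircase schedule; the paper does not state whether the decay is staircase or continuous, so the step form here is an assumption.

```python
BASE_LR = 0.002  # learning rate quoted in the experiment setup
DECAY = 0.98     # exponential decay rate
STEP = 200       # epochs between decay steps

def lr_at(epoch):
    """Learning rate at a given epoch under a staircase exponential decay
    (an assumed reading of 'decay with rate 0.98 every 200 epochs')."""
    return BASE_LR * DECAY ** (epoch // STEP)
```

In PyTorch this would correspond to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.98)` wrapped around `torch.optim.Adam` with a nonzero `weight_decay`.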
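The pseudocode the review points to ("Algorithm 1 Simulated asynchronous GNN inference") is not reproduced on this page. A generic Hogwild-style simulation of asynchronous message passing looks like the sketch below: nodes are updated one at a time in random order, each reading possibly stale neighbor embeddings. The neighbor-averaging update rule is a placeholder for illustration, not the paper's energy-GNN update.

```python
import random

def async_inference(graph, x, num_updates, seed=0):
    """Simulate asynchronous inference: repeatedly pick a random node and
    recompute its embedding from its neighbors' current (possibly stale)
    values. The averaging update is a placeholder update rule."""
    rng = random.Random(seed)
    nodes = list(graph)
    for _ in range(num_updates):
        v = rng.choice(nodes)
        if graph[v]:  # skip isolated nodes
            x[v] = sum(x[u] for u in graph[v]) / len(graph[v])
    return x

# Path graph 0-1-2: asynchronous averaging drives embeddings to consensus.
graph = {0: [1], 1: [0, 2], 2: [1]}
x = async_inference(graph, {0: 0.0, 1: 1.0, 2: 2.0}, num_updates=500)
```

On a connected graph this placeholder dynamic contracts the spread of node values toward a common fixed point, which is the kind of convergence-under-asynchrony behavior the paper studies.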