From Continuous Dynamics to Graph Neural Networks: Neural Diffusion and Beyond
Authors: Andi Han, Dai Shi, Lequan Lin, Junbin Gao
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this survey, we provide the first comprehensive review of studies that leverage the continuous perspective of GNNs. To this end, we introduce foundational ingredients for adapting continuous dynamics to GNNs, along with a general framework for the design of graph neural dynamics. We then review and categorize existing works based on their driven mechanisms and underlying dynamics. We also summarize how the limitations of classic GNNs can be addressed under the continuous framework. We conclude by identifying multiple open research directions. Section 7 summarizes empirical experimental procedures for evaluating the performance of different graph neural dynamics (drawn from other works, not the survey's own experiments). |
| Researcher Affiliation | Academia | Andi Han, University of Sydney; Dai Shi, University of Sydney; Lequan Lin, University of Sydney; Junbin Gao, University of Sydney |
| Pseudocode | No | The paper is a survey and theoretical in nature, describing various dynamics and models from other research papers but does not include any pseudocode or algorithm blocks for its own contributions. |
| Open Source Code | No | We refer to https://github.com/twitter-research/graph-neural-pde for typical implementation of continuous GNNs. (This reference points to a general implementation of continuous GNNs, not source code specifically released by the authors for the methodology described in this survey paper.) |
| Open Datasets | Yes | Common benchmark datasets include citation networks, Cora (McCallum et al., 2000), Citeseer (Sen et al., 2008), Pubmed (Namata et al., 2012), co-authorship graphs, including Coauthor CS, Coauthor Physics (Shchur et al., 2018), co-purchase graphs, including Computer and Photo (McAuley et al., 2015). ... One popular dataset is PPI (Hamilton et al., 2017), where each graph corresponds to a different human tissue documenting protein-protein interactions. ... Common large-scale graph benchmarks include OGB-arxiv, OGB-proteins and OGB-products (Hu et al., 2020). |
| Dataset Splits | Yes | Node-level classification The most widely considered task is node classification, which aims to classify test nodes in a graph under a semi-supervised setting (Kipf & Welling, 2017). For this task, the output Y in (4) is passed through a linear layer for predicting the probabilities for each class. Common benchmark datasets include citation networks, Cora (McCallum et al., 2000), Citeseer (Sen et al., 2008), Pubmed (Namata et al., 2012), co-authorship graphs, including Coauthor CS, Coauthor Physics (Shchur et al., 2018), co-purchase graphs, including Computer and Photo (McAuley et al., 2015). |
| Hardware Specification | No | The paper is a survey and does not conduct its own experiments. Therefore, it does not provide specific hardware details used for experimental runs. |
| Software Dependencies | No | The forward and backward propagation schemes can be implemented through the torchdiffeq package (Chen, 2018). (A software package is mentioned, but no specific version number is provided.) |
| Experiment Setup | No | The paper is a survey of existing research and does not present its own experimental setup or hyperparameter details as it does not conduct experiments. |
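The continuous-GNN forward pass the review refers to (and that packages such as torchdiffeq implement with adaptive ODE solvers) can be illustrated with a minimal sketch. The example below is not from the survey: it integrates the graph heat equation dX/dt = -L X with plain explicit Euler steps, and all function names are illustrative assumptions.

```python
# Minimal sketch of graph heat diffusion dx/dt = -L x solved with explicit
# Euler steps. Real continuous-GNN implementations (e.g. torchdiffeq's
# odeint) use learned diffusivities and adaptive solvers; this only shows
# the underlying dynamics. All names here are illustrative.

def graph_laplacian(n, edges):
    """Dense combinatorial Laplacian L = D - A for an undirected graph."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    return L

def diffuse(x, edges, t=1.0, steps=100):
    """Integrate dx/dt = -L x from time 0 to t with `steps` Euler updates."""
    n = len(x)
    L = graph_laplacian(n, edges)
    h = t / steps
    for _ in range(steps):
        Lx = [sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] - h * Lx[i] for i in range(n)]
    return x

# Example: on two connected nodes, diffusion drives the scalar features
# toward their common mean while preserving their sum.
out = diffuse([1.0, 0.0], edges=[(0, 1)], t=5.0, steps=500)
```

Running diffusion for long enough smooths node features toward a consensus value, which is exactly the oversmoothing behavior the survey discusses as a limitation of classic GNN dynamics.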