When Graph Neural Networks Meet Dynamic Mode Decomposition
Authors: Dai Shi, Lequan Lin, Andi Han, Zhiyong Wang, Yi Guo, Junbin Gao
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our approach through extensive experiments on various learning tasks, including directed graphs, large-scale graphs, long-range interactions, and spatial-temporal graphs. We also empirically verify that our proposed models can serve as powerful encoders for link prediction tasks. The results demonstrate that our DMD-enhanced GNNs achieve state-of-the-art performance, highlighting the effectiveness of integrating DMD into GNN frameworks. ... Section 7 (Experiments): We apply the proposed DMD-GNNs to various learning tasks: 1) node classification on both homophilic and heterophilic graphs; 2) node classification on long-range graphs (Dwivedi et al., 2022); 3) spatial-temporal dynamic predictions; 4) as an efficient encoder for link prediction tasks. |
| Researcher Affiliation | Academia | Dai Shi (University of Sydney), Lequan Lin (University of Sydney), Andi Han (RIKEN AIP), Zhiyong Wang (University of Sydney), Yi Guo (Western Sydney University), Junbin Gao (University of Sydney). ... Equal contribution. Dai Shi is the corresponding author. |
| Pseudocode | Yes | C.1 PSEUDOCODE OF DMD-GNN: In addition to the flow chart in Figure 1, below we show the pseudocode of the DMD algorithm and the training of DMD-GNN. Algorithm 1 Training Algorithm for DMD-GNN (Classification). Input: input graph adjacency A, initial GNN model, and ground truth Y. Output: DMD-GNN prediction accuracy. ... Algorithm 2 Dynamic Mode Decomposition (DMD) Algorithm. Input: snapshots of the system H(ℓ) and H(ℓ+1), truncation rate ξ. Output: DMD modes Ψ; optional: reconstructed operator K. |
| Open Source Code | Yes | Our code is available at https://github.com/EEthanShi/Graph-DMD. |
| Open Datasets | Yes | For datasets, we include the homophilic datasets Cora, Citeseer, and Pubmed, and the heterophilic graphs Texas, Wisconsin, and Cornell. In addition, to illustrate DMD-GNNs' scalability, we test our models via OGB-arXiv (Hu et al., 2020). ... in LRGB from PyTorch Geometric, namely COCO-SP and PascalVOC-SP ... We select three spatial-temporal graph datasets from torch-geometric-temporal (Rozemberczki et al., 2021b). |
| Dataset Splits | Yes | For all included datasets, we followed the standard data split scheme. ... The dataset is split at the edge level for training, validation, and testing. Specifically, 5% of the edges are randomly chosen as validation data, while 10% are used as test data. The remaining edges form the training set. |
| Hardware Specification | Yes | All experiments are conducted in Python 3.11 on one NVIDIA RTX 4090 GPU with 16384 CUDA cores and 24 GB of memory. |
| Software Dependencies | Yes | All experiments are conducted in Python 3.11 on one NVIDIA RTX 4090 GPU with 16384 CUDA cores and 24 GB of memory. ... All included baselines are implemented using torch_geometric ... We select three spatial-temporal graph datasets from torch-geometric-temporal (Rozemberczki et al., 2021b). |
| Experiment Setup | Yes | The maximum number of epochs is set as 200 for all included datasets, except 500 for Ogbn-arXiv. ... For all DMD-GNNs, we fixed the learning rate as 0.0005, weight decay as 5e-3, dropout as 0.1, and the hidden dimension as 256; we also fixed the batch size as 128 for the training set and 500 for both the validation and test sets. ... For all experiments in STGs, we let the learning rate be 0.0005 and the weight decay be 0.0001, with dropout as 0.8 and hidden dimension as 64. ... We let the weight decay be 0.005, dropout be 0.7, and the number of hidden dimensions be 16. |
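The interface of the quoted Algorithm 2 (input: snapshot matrices H(ℓ) and H(ℓ+1) plus a truncation rate ξ; output: DMD modes Ψ and an optional reconstructed operator K) matches the standard exact-DMD recipe. A minimal NumPy sketch of that standard recipe is shown below; this is an illustrative reconstruction, not the authors' implementation, and the function name `dmd` and the cumulative-energy truncation rule are assumptions.

```python
import numpy as np

def dmd(H0, H1, xi=0.99):
    """Exact DMD: given snapshots H0 ~ H(l) and H1 ~ H(l+1),
    return DMD modes Psi, eigenvalues, and the reduced operator K."""
    # Rank-truncated SVD of the first snapshot matrix
    U, S, Vh = np.linalg.svd(H0, full_matrices=False)
    energy = np.cumsum(S) / np.sum(S)
    r = int(np.searchsorted(energy, xi) + 1)  # smallest rank capturing fraction xi
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]
    # Reduced (projected) linear operator K ~ U* H1 V S^{-1}
    K = U.conj().T @ H1 @ Vh.conj().T @ np.diag(1.0 / S)
    eigvals, W = np.linalg.eig(K)
    # Exact DMD modes lifted back to the full state space
    Psi = H1 @ Vh.conj().T @ np.diag(1.0 / S) @ W
    return Psi, eigvals, K
```

On linearly consistent snapshots (H1 = A H0 with H0 of full rank), the returned eigenvalues recover the spectrum of A, which is the usual sanity check for a DMD implementation.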
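The edge-level split quoted in the Dataset Splits row (5% of edges for validation, 10% for test, the remainder for training) amounts to a random permutation of the edge list. The sketch below is a hypothetical reconstruction of that protocol; `split_edges` is an assumed helper name, not a function from the paper's code.

```python
import numpy as np

def split_edges(edge_index, val_frac=0.05, test_frac=0.10, seed=0):
    """Randomly split a (2, E) edge list into train/val/test at the edge level."""
    num_edges = edge_index.shape[1]
    perm = np.random.default_rng(seed).permutation(num_edges)
    n_val = int(val_frac * num_edges)    # 5% of edges for validation
    n_test = int(test_frac * num_edges)  # 10% of edges for testing
    val = edge_index[:, perm[:n_val]]
    test = edge_index[:, perm[n_val:n_val + n_test]]
    train = edge_index[:, perm[n_val + n_test:]]  # remaining 85% for training
    return train, val, test
```

In a PyTorch Geometric pipeline the same split is typically obtained with the built-in `RandomLinkSplit` transform rather than a hand-rolled helper.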