Uncertainty Modeling in Graph Neural Networks via Stochastic Differential Equations

Authors: Richard Bergna, Sergio Calvo-Ordoñez, Felix Opolka, Pietro Liò, José Miguel Hernández-Lobato

ICLR 2025

Reproducibility assessment (Variable: Result, with supporting LLM response):
Research Type: Experimental. "Empirical results across several benchmarks demonstrate that our framework is competitive in out-of-distribution detection, robustness to noise, and active learning, underscoring the ability of LGNSDEs to quantify uncertainty reliably." (Section 5, Experiments)
Researcher Affiliation: Academia. Richard Bergna (1), Sergio Calvo-Ordoñez (2,3), Felix L. Opolka (4), Pietro Liò (4), José Miguel Hernández-Lobato (1). (1) Department of Engineering, University of Cambridge; (2) Mathematical Institute, University of Oxford; (3) Oxford-Man Institute of Quantitative Finance, University of Oxford; (4) Department of Computer Science and Technology, University of Cambridge. Corresponding Author. Email: EMAIL
Pseudocode: No. The paper describes the methodology using mathematical equations and descriptive text, but it does not include any clearly labeled pseudocode blocks or algorithms.
Open Source Code: No. The paper does not provide an explicit statement about releasing its source code, nor does it include a link to a code repository.
Open Datasets: Yes. "We evaluate LGNSDE on the following datasets: Cora (Sen et al., 2008), CiteSeer (Giles et al., 1998), PubMed (Sen et al., 2008), and the Amazon co-purchasing graphs Computers (McAuley et al., 2015) and Photo (Shchur et al., 2018)."
Dataset Splits: Yes. "In conducting our experiments, we used the setup outlined in Shchur et al. (2018). This involved using 20 random weight initializations for datasets with fixed Planetoid splits and implementing 100 random splits for other datasets."

Table 8: Dataset statistics before and after active learning.

Dataset    # Nodes  # Links  Train/Val/Test Split
Cora       2,708    5,429    140/500/1000
Citeseer   3,327    4,732    120/500/1000
Computers  13,752   245,861  200/500/1000
Photo      7,650    119,081  160/500/1000
Pubmed     19,717   44,338   60/500/1000
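The evaluation protocol quoted above can be summarized in a short sketch. This is an illustrative reconstruction, not the authors' code: the function name and the exact grouping of Planetoid datasets (Cora, Citeseer, Pubmed carry fixed public splits) are assumptions based on the Shchur et al. (2018) setup the paper cites.

```python
# Datasets that ship with a fixed public Planetoid split (assumed grouping).
PLANETOID_FIXED_SPLITS = {"Cora", "Citeseer", "Pubmed"}

def evaluation_protocol(dataset: str) -> tuple[int, bool]:
    """Return (number of runs, whether each run draws a fresh random split).

    Fixed-split datasets are re-run with 20 random weight initializations;
    all other datasets use 100 random train/val/test splits per the paper.
    """
    if dataset in PLANETOID_FIXED_SPLITS:
        return 20, False   # fixed split; only the weight-init seed varies
    return 100, True       # e.g. Computers, Photo: resample the split each run
```

For example, `evaluation_protocol("Cora")` yields 20 runs over the fixed split, while `evaluation_protocol("Photo")` yields 100 runs with fresh splits.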
Hardware Specification: No. The paper does not specify the hardware (e.g., GPU models or CPU types) used for the experiments. It only reports 'OOM' (out-of-memory) errors for some models, which implies hardware limitations but gives no explicit specifications.
Software Dependencies: No. The paper names the Adam optimizer and the SRK/RK4 solvers but does not provide version numbers for any software dependencies or libraries used in the implementation (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup: Yes (Appendix B.1, Hyperparameter Search).

Table 6: Hyperparameter grid search configuration.

Hyperparameter    Values
Learning Rate     {0.001, 0.005, 0.01, 0.1}
Weight Decay      {0.01, 0.001, 0.0005, 0.0001}
Epochs            {15, 100, 200, 300}
Dropout           {0.0, 0.1, 0.3, 0.5}
Hidden Dimension  {16, 32, 64, 128, 256}
Step Size         {0.01, 0.05, 0.1, 0.2}

Table 7: Hyperparameters left out of the grid search for all models and used for all datasets.

Parameter     GNSDE  GNODE  Other
t1            1      1      N/A
Optimizer     Adam   Adam   Adam
Method        SRK    RK4    N/A
Early Stop    20     20     20
Diffusion GG  1.0    N/A    N/A