Beyond Random Masking: When Dropout meets Graph Convolutional Networks

Authors: Yuankai Luo, Xiao-Ming Wu, Hao Zhu

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our theoretical findings are validated through extensive experiments on both node-level and graph-level tasks across 14 datasets. Notably, GCN with dropout and batch normalization outperforms state-of-the-art methods on several benchmarks, demonstrating the practical impact of our theoretical insights.
Researcher Affiliation | Collaboration | Yuankai Luo (Beihang University, Beijing, China; The Hong Kong Polytechnic University, Hong Kong), Xiao-Ming Wu (The Hong Kong Polytechnic University, Hong Kong), Hao Zhu (Data61, CSIRO, Sydney, Australia)
Pseudocode | No | The paper contains mathematical formulas, theorems, and proofs within the 'Theoretical Framework' section, but no structured pseudocode or algorithm blocks are explicitly presented.
Open Source Code | Yes | Our code is available at https://github.com/LUOyk1999/dropout-theory.
Open Datasets | Yes | For node-level tasks, we used 10 datasets: Cora, CiteSeer, PubMed (Sen et al., 2008), ogbn-arxiv, ogbn-products (Hu et al., 2020), Amazon-Computer, Amazon-Photo, Coauthor-CS, Coauthor-Physics (Shchur et al., 2018), and WikiCS (Mernyei & Cangea, 2020)... For graph-level tasks, we used MNIST, CIFAR10 (Dwivedi et al., 2023), and two Peptides datasets (functional and structural) (Dwivedi et al., 2022).
Dataset Splits | Yes | Cora, CiteSeer, and PubMed are citation networks, evaluated using the semi-supervised setting and data splits from Kipf & Welling (2017). We used the standard 60%/20%/20% training/validation/test splits and accuracy as the evaluation metric (Chen et al., 2022; Shirzad et al., 2023; Deng et al., 2024). For WikiCS, we adopted the official splits and metrics (Mernyei & Cangea, 2020). For large-scale graphs, we included ogbn-arxiv and ogbn-products with 0.16M to 2.4M nodes, using OGB's standard evaluation settings (Hu et al., 2020).
Hardware Specification | Yes | The experiments are conducted on a single workstation with 8 RTX 3090 GPUs.
Software Dependencies | No | The paper mentions the 'PyTorch Geometric library' but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | For node-level tasks, we adhered to the training protocols specified in (Deng et al., 2024; Luo et al., 2024b;a), employing BN and adjusting the dropout rate between 0.1 and 0.7. In graph-level tasks, we adopted the settings from (Tönshoff et al., 2023; Luo et al., 2025), utilizing BN with a consistent dropout rate of 0.2. All experiments were run with 5 different random seeds, and we report the mean accuracy and standard deviation.
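The setup described above (a GCN layer combined with batch normalization and dropout) can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: the exact placement of BN, dropout, and the activation in the authors' code is an assumption here, and `gcn_layer_with_dropout` is a hypothetical helper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer_with_dropout(A, X, W, p=0.2, train=True):
    """One GCN layer with BN and inverted dropout (illustrative sketch).

    A: (n, n) adjacency matrix, X: (n, d_in) node features,
    W: (d_in, d_out) weight matrix, p: dropout rate
    (the review notes 0.1-0.7 for node tasks, 0.2 for graph tasks).
    """
    # Symmetrically normalized adjacency with self-loops (Kipf & Welling, 2017)
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    H = A_norm @ X @ W
    # Batch normalization over the node dimension, per feature channel
    H = (H - H.mean(axis=0)) / np.sqrt(H.var(axis=0) + 1e-5)
    if train:
        # Inverted dropout: scale by 1/(1-p) so test-time expectations match
        mask = rng.random(H.shape) >= p
        H = H * mask / (1.0 - p)
    return np.maximum(H, 0.0)  # ReLU
```

At evaluation time (`train=False`) the dropout mask is skipped entirely, which is what makes the train/test expectations line up under the inverted-dropout scaling.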