Topology-Aware Dynamic Reweighting for Distribution Shifts on Graph

Authors: Weihuang Zheng, Jiashuo Liu, Jiaxing Li, Jiayun Wu, Peng Cui, Youyong Kong

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on standard OOD node classification datasets demonstrate the effectiveness of the TAR framework, showing its superiority in addressing distribution-shift problems."
Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Southeast University; (2) Department of Computer Science and Technology, Tsinghua University. Correspondence to: Youyong Kong <EMAIL>.
Pseudocode | Yes | Algorithm 1: Topology-Aware Dynamic Reweighting (TAR) Scheme
Open Source Code | No | The paper contains no explicit statement about releasing code and provides no repository link. Appendix C states: "Our implementation is under the architecture of PyTorch (Paszke et al., 2019) and PyG (Fey & Lenssen, 2019)." This refers to third-party tools the authors used, not to a release of their own implementation.
Open Datasets | Yes | "We conduct experiments on five widely used node classification datasets from the GOOD benchmark (Gui et al., 2022) to validate the effectiveness of TAR in improving out-of-distribution (OOD) generalization. We use five node classification datasets under both concept shift and covariate shift (the detailed definitions of these two shifts are provided in Appendix B): WebKB (Pei et al., 2020), CBAS (Ying et al., 2019), Twitch (Rozemberczki & Sarkar, 2020), Cora (Bojchevski & Günnemann, 2017), and Arxiv (Hu et al., 2020)."
Dataset Splits | Yes | "We followed the GOOD benchmark (Gui et al., 2022) for data splitting, a standard widely adopted in prior research (Sui et al., 2023; Liu et al., 2023; Guo et al., 2024)."
Hardware Specification | Yes | "All of our experiments are run on one GeForce RTX 3090 with 24 GB of memory."
Software Dependencies | Yes | "The detailed versions of some key packages are listed below: python 3.8, pytorch 1.13.1."
Experiment Setup | Yes | "For GCN, we configured the models with 3 layers, a hidden dimension of 300, a dropout rate of 0.5, and a learning rate of 0.01. For Polynormer, the local module is set to 5 layers and the global module to 1 layer, with a hidden dimension of 512 and a learning rate of 0.001. Throughout all experiments, we employed the Adam optimizer with a weight decay of 0. The search spaces for all the hyper-parameters of TAR are as follows. Entropy term β: {1, 0.1, 0.01, 0.001}. TAR inner learning rate γ: {0.1, 0.01, 0.001}. Gradient-flow iterations Tin: {1, 3, 5, 10, 20}. Graph extrapolation ratio: {0.0, 0.2, 0.4}."
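The reported TAR search spaces can be enumerated with a simple grid, which also makes the experimental budget concrete. This is a minimal stdlib sketch; the dictionary keys (`beta`, `gamma`, `t_in`, `extrap_ratio`) are illustrative labels chosen here, not identifiers from the authors' code.

```python
from itertools import product

# TAR hyper-parameter search space as reported in the paper's appendix.
# Key names are assumptions for this sketch, not the authors' identifiers.
search_space = {
    "beta": [1, 0.1, 0.01, 0.001],    # entropy term
    "gamma": [0.1, 0.01, 0.001],      # TAR inner learning rate
    "t_in": [1, 3, 5, 10, 20],        # gradient-flow iterations T_in
    "extrap_ratio": [0.0, 0.2, 0.4],  # graph extrapolation ratio
}

def grid(space):
    """Enumerate every hyper-parameter combination as a dict."""
    keys = list(space)
    return [dict(zip(keys, vals)) for vals in product(*space.values())]

configs = grid(search_space)
print(len(configs))  # 4 * 3 * 5 * 3 = 180 candidate configurations
```

So a full sweep over the stated ranges amounts to 180 configurations per model/dataset pair, before any early stopping or pruning the authors may have applied.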