Unified Graph Neural Networks Pre-training for Multi-domain Graphs
Authors: Mingkai Lin, Xiaobin Hong, Wenzhong Li, Sanglu Lu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the effectiveness of MDP-GNN through theoretical analysis and extensive experiments on four real-world graph datasets, showing its superiority in enhancing GNN performance across diverse domains. |
| Researcher Affiliation | Academia | State Key Laboratory for Novel Software Technology, Nanjing University Nanjing, China EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Pre-training Process for MDP-GNN |
| Open Source Code | No | The paper does not explicitly state that source code for the described methodology is publicly available or provide a link to a repository. |
| Open Datasets | Yes | We evaluate MDP-GNN using four large-scale text-free graphs from distinct domains: (1) Academic (Hu et al. 2020a), a citation network with papers indexed by MAG (Wang et al. 2020); (2) Product (Hu et al. 2020a), an Amazon product co-purchasing network; (3) Reddit (Hamilton, Ying, and Leskovec 2017), a comment graph derived from Reddit; and (4) Yelp (Zeng et al. 2019), a social network formed from the Yelp platform. |
| Dataset Splits | Yes | For the testing graphs, we allocate up to 10% of the known labels for training and another 10% for validation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions a 'two-layer GT' as the GNN backbone and the 'Adam optimizer', but it does not specify version numbers for any software libraries or frameworks used. |
| Experiment Setup | Yes | We tune all the models for 1000 epochs using the Adam optimizer, with a learning rate of 0.005. Each experiment is conducted five times, and the mean results are reported. The conversion functions {ϕk} for feature integration are implemented with MLPs. For the bi-level connection learning of Eq. 4, the inner and outer loops are both set to 10 with a learning rate of 0.001, as is the inner step in Eq. 6. The remaining hyper-parameters are set as τ = 0.8, λ1 = 1, λ2 = 0.01, Np = 256. |
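The reported setup can be collected into a single configuration for reproduction attempts. The sketch below is a minimal, hedged transcription: the hyperparameter values and the 10%/10% label split come from the table above, while the class name, field names, and the split helper are illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass
import random


@dataclass
class MDPGNNConfig:
    # Values transcribed from the reported experiment setup;
    # field names are illustrative, not from the paper's code.
    epochs: int = 1000
    lr: float = 0.005            # Adam learning rate for model tuning
    bilevel_inner_steps: int = 10  # inner loop of Eq. 4
    bilevel_outer_steps: int = 10  # outer loop of Eq. 4
    bilevel_lr: float = 0.001    # also used for the inner step in Eq. 6
    tau: float = 0.8             # temperature τ
    lambda1: float = 1.0         # λ1
    lambda2: float = 0.01        # λ2
    num_prompts: int = 256       # N_p
    num_runs: int = 5            # experiments repeated, mean reported


def split_labeled_nodes(node_ids, train_frac=0.10, val_frac=0.10, seed=0):
    """Allocate up to 10% of known labels for training and another 10%
    for validation (as described for the testing graphs); the remainder
    is held out. A hypothetical helper, not the authors' split code."""
    rng = random.Random(seed)
    ids = list(node_ids)
    rng.shuffle(ids)
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    rest = ids[n_train + n_val:]
    return train, val, rest
```

For example, `split_labeled_nodes(range(1000))` yields 100 training ids, 100 validation ids, and 800 held-out ids; the hardware, library versions, and GNN backbone details would still have to be guessed, since the paper does not report them.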