SDMG: Smoothing Your Diffusion Models for Powerful Graph Representation Learning
Authors: Junyou Zhu, Langzhou He, Chao Gao, Dongpeng Hou, Zhen Su, Philip S. Yu, Juergen Kurths, Frank Hellmann
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments validate the effectiveness of our method, suggesting a promising direction for advancing diffusion models in graph representation learning. |
| Researcher Affiliation | Academia | 1Department of Complexity Science, Potsdam Institute for Climate Impact Research, 14473 Potsdam, Germany; 2Machine Learning Group, Technical University of Berlin, 10587 Berlin, Germany; 3Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, USA; 4School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, China. |
| Pseudocode | No | The paper describes the methods textually and with equations and diagrams but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our implementation is available at: https://github.com/JYZHU03/SDMG. |
| Open Datasets | Yes | For node classification, we use six datasets: three citation networks (Cora, CiteSeer, PubMed (Sen et al., 2008)), two co-purchase graphs (Photo, Computer (Shchur et al., 2018)), and the large-scale arXiv dataset from the Open Graph Benchmark (Hu et al., 2020a). For graph classification, we use five benchmarks: IMDB-B, IMDB-M, PROTEINS, COLLAB, and MUTAG (Yanardag & Vishwanathan, 2015). |
| Dataset Splits | Yes | For both the node-level and graph-level benchmarks, we adopt the commonly used public splits to ensure fair comparison with existing baselines (Yang et al., 2024). |
| Hardware Specification | Yes | Our experiments are conducted on 4 NVIDIA H100 GPUs. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with version numbers. |
| Experiment Setup | Yes | The hyperparameters employed in our experiments are detailed in Tables 4 and 5. Note that we did not extensively fine-tune these hyperparameters, suggesting that further optimization could potentially enhance the experimental results. |