Sparse Causal Discovery with Generative Intervention for Unsupervised Graph Domain Adaptation

Authors: Junyu Luo, Yuhao Tang, Yiwei Fu, Xiao Luo, Zhizhuo Kou, Zhiping Xiao, Wei Ju, Wentao Zhang, Ming Zhang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on multiple real-world datasets demonstrate that SLOGAN significantly outperforms existing baselines.
Researcher Affiliation | Academia | 1 State Key Laboratory for Multimedia Information Processing, School of Computer Science, PKU-Anker LLM Lab; 2 Peking University; 3 University of California, Los Angeles; 4 Hong Kong University of Science and Technology; 5 Paul G. Allen School of Computer Science and Engineering, University of Washington. Correspondence to: Xiao Luo <EMAIL>, Zhiping Xiao <EMAIL>, Ming Zhang <EMAIL>.
Pseudocode | Yes | Algorithm 1: Optimization Algorithm of SLOGAN
Open Source Code | No | The paper does not state that its source code is available: there is no repository link, no explicit release statement, and no code included in the supplementary materials.
Open Datasets | Yes | We investigate unsupervised graph domain adaptation leveraging benchmark datasets, adopting both cross-dataset and dataset-split scenarios for a thorough evaluation. For cross-dataset scenarios we achieve domain adaptation on PTC (Helma et al., 2001). ... The dataset-splitting experiments are performed on TWITTER-Real-Graph-Partial (Pan et al., 2015), NCI1 (Wale & Karypis, 2006) and Letter-Med (Riesen & Bunke, 2008), using their diverse nature to evaluate our domain adaptation capability.
Dataset Splits | Yes | For cross-dataset scenarios we achieve domain adaptation on PTC (Helma et al., 2001). ... For dataset-split scenarios, we follow previous works (Ding et al., 2018; Yin et al., 2022; Lu et al., 2023) to split the dataset by graph density. ... The datasets are divided into four subsets (e.g., T0, T1, T2, and T3 for TWITTER-Real-Graph-Partial), organized by increasing levels of graph density.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, processor types, or memory) used to run its experiments.
Software Dependencies | No | The paper does not list the software dependencies (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | We use a 2-layer GCN encoder with a hidden dimension of 128. We use the Adam optimizer for 100 epochs of source-domain training with a learning rate of 0.001 and a batch size of 128. For the target domain, adaptation is performed over 30 epochs. The loss weights, γ and η, are set to 0.003 and 0.1, according to the sensitivity experiments.
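Since the paper's code is not released, the reported setup can only be reconstructed from the quoted text. The sketch below is a minimal, dependency-free illustration: a `CONFIG` dict collecting the stated hyperparameters, plus a pure-Python forward pass of one standard GCN layer (the paper names the encoder but not its exact variant, so symmetric normalization with self-loops is an assumption here; `CONFIG` and `gcn_layer` are hypothetical names, not from the paper).

```python
import math

# Hyperparameters as reported in the paper's experiment setup.
# The authors' implementation is unavailable; this is illustrative only.
CONFIG = {
    "encoder": "GCN",
    "num_layers": 2,
    "hidden_dim": 128,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "batch_size": 128,
    "source_epochs": 100,
    "target_epochs": 30,
    "gamma": 0.003,  # loss weight γ
    "eta": 0.1,      # loss weight η
}

def gcn_layer(adj, x, w):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).

    adj: n x n adjacency matrix (lists of 0/1), x: n x f node features,
    w: f x h weight matrix. Returns n x h hidden features.
    """
    n, f, h = len(adj), len(x[0]), len(w[0])
    # Add self-loops: A + I
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    norm = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # Aggregate neighbor features, then linear transform and ReLU
    agg = [[sum(norm[i][k] * x[k][j] for k in range(n)) for j in range(f)]
           for i in range(n)]
    return [[max(0.0, sum(agg[i][j] * w[j][c] for j in range(f)))
             for c in range(h)] for i in range(n)]
```

In a real implementation two such layers (hidden dimension 128) would be stacked and trained with Adam at the listed learning rate; a framework such as PyTorch Geometric would replace the hand-rolled matrix arithmetic.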