Domain Adaptive Unfolded Graph Neural Networks

Authors: Zepeng Zhang, Olga Fink

AAAI 2025

Reproducibility
Variable / Result / LLM Response
Research Type Experimental Extensive experiments on five real-world datasets demonstrate that the UGNNs integrated with CP outperform state-of-the-art GDA baselines.
Researcher Affiliation Academia Intelligent Maintenance and Operations Systems (IMOS) Lab, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland. EMAIL, EMAIL
Pseudocode No The paper describes methods using mathematical equations and textual explanations, such as in "Proposed Methodology Design" and "Unfolded Graph Neural Networks" sections, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code Yes Code https://github.com/zepengzhang/DAUGNN
Open Datasets Yes We conduct experiments on three citation networks, namely ACMv9 (A), Citationv1 (C), and DBLPv7 (D) (Zhang et al. 2021), and two social networks, namely Germany (DE) and England (EN) (Rozemberczki and Sarkar 2021).
Dataset Splits Yes We use 80% of labeled nodes in the source domain for training, 20% of the labeled nodes in the source domain for validation, and all the nodes in the unlabeled target domain for testing.
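As a sketch, the split described above (80% of labeled source-domain nodes for training, 20% for validation, all target-domain nodes for testing) could be reproduced as follows; the helper name and seed are illustrative, not taken from the paper's code:

```python
import numpy as np

def split_source_nodes(labeled_idx, seed=0):
    """80/20 train/validation split of labeled source-domain nodes.

    Hypothetical helper: shuffles the labeled node indices and
    assigns the first 80% to training, the rest to validation.
    All unlabeled target-domain nodes are used for testing.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(labeled_idx)
    n_train = int(0.8 * len(idx))
    return idx[:n_train], idx[n_train:]

# Example with 100 labeled source nodes: 80 train, 20 validation.
train_idx, val_idx = split_source_nodes(np.arange(100))
```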
Hardware Specification Yes All the experiments are conducted on a Tesla V100 GPU.
Software Dependencies No The paper mentions using the Adam optimizer (Kingma and Ba 2015) but does not provide specific version numbers for programming languages, libraries, or other software dependencies.
Experiment Setup Yes The node representation dimension is set to 128, and the number of layers is set to 8. We use the Adam optimizer (Kingma and Ba 2015). We apply grid search for the learning rate and the weight decay parameter over the range {1e-4, 5e-4, 1e-3, 5e-3}. The trade-off parameter ξ for the MMD loss is searched over the range {1, 2, 3, 4, 5}. For APPNP and GPRGNN, there is an additional teleport parameter α, which is searched over the range {0.1, 0.2, 0.5}. For Elastic GNN, there are two additional parameters λ1 and λ2, which are searched over the range {3, 6, 9}.
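The hyperparameter grid reported above can be enumerated with `itertools.product`; a minimal sketch, where the function names are illustrative and not from the paper's repository:

```python
from itertools import product

# Hyperparameter ranges as reported in the experiment setup.
lr_wd_values = [1e-4, 5e-4, 1e-3, 5e-3]   # learning rate and weight decay
xi_values = [1, 2, 3, 4, 5]               # MMD-loss trade-off parameter ξ
alpha_values = [0.1, 0.2, 0.5]            # teleport parameter α (APPNP/GPRGNN only)

def base_grid():
    """Enumerate (learning_rate, weight_decay, xi) configurations."""
    return list(product(lr_wd_values, lr_wd_values, xi_values))

def appnp_grid():
    """APPNP and GPRGNN additionally search the teleport parameter α."""
    return list(product(lr_wd_values, lr_wd_values, xi_values, alpha_values))

print(len(base_grid()))   # 4 * 4 * 5 = 80 configurations
print(len(appnp_grid()))  # 80 * 3 = 240 configurations
```

Each tuple in the grid would correspond to one training run, with the best configuration selected on the source-domain validation split.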