HG-Adapter: Improving Pre-Trained Heterogeneous Graph Neural Networks with Dual Adapters
Authors: Yujie Mo, Runpeng Yu, Xiaofeng Zhu, Xinchao Wang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Theoretical analysis indicates that the proposed method achieves a lower generalization error bound than existing methods, thus obtaining superior generalization ability. Comprehensive experiments demonstrate the effectiveness and generalization of the proposed method on different downstream tasks. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, University of Electronic Science and Technology of China 2National University of Singapore |
| Pseudocode | Yes | B ALGORITHM. This section provides the pseudo-code of the proposed method in Section B.1 and the complexity analysis in Section B.2.<br>Algorithm 1: The pseudo-code of the proposed method.<br>Input: heterogeneous graph G = (V, E, A, R, ϕ, φ), pre-trained HGNN model, maximum training steps E;<br>Output: homogeneous and heterogeneous adapters, projection gρ;<br>1: Initialize parameters and load pre-trained parameters;<br>2: while not reaching E do<br>3: Obtain the homogeneous graph structure A by Eq. (7);<br>4: Obtain the homogeneous representations Z by Eq. (7);<br>5: Obtain the heterogeneous graph structure S by Eq. (9);<br>6: Obtain the heterogeneous representations Ẑ by Eq. (10);<br>7: Conduct the label propagation by Eq. (11);<br>8: Compute the label-propagated contrastive loss by Eq. (12);<br>9: Compute the feature reconstruction loss by Eq. (13);<br>10: Compute the margin loss by Eq. (14);<br>11: Compute the objective function J by Eq. (15);<br>12: Back-propagate J to update model weights;<br>13: end while |
| Open Source Code | Yes | The code of the proposed method is released at https://github.com/YujieMo/HG-Adapter. |
| Open Datasets | Yes | The used datasets include three academic datasets (i.e., ACM (Wang et al., 2019), DBLP (Wang et al., 2019), and Aminer (Hu et al., 2019)), and one business dataset (i.e., Yelp (Lu et al., 2019)). |
| Dataset Splits | Yes | Table 3: Statistics of all datasets.<br>Dataset / #Nodes / #Node Types / #Edges / #Edge Types / Target Node/Edge / #Training / #Test / #Classes<br>ACM / 8,994 / 3 / 25,922 / 4 / Paper / 600 / 2,125 / 3<br>Yelp / 3,913 / 4 / 72,132 / 6 / Business / 300 / 2,014 / 3<br>DBLP / 18,405 / 3 / 67,946 / 4 / Author / 800 / 2,857 / 4<br>Aminer / 55,783 / 3 / 153,676 / 4 / Paper / 80 / 1,000 / 4 |
| Hardware Specification | Yes | All experiments were implemented in PyTorch and conducted on a server with 8 NVIDIA GeForce RTX 3090 GPUs (24GB memory each). |
| Software Dependencies | No | All experiments were implemented in PyTorch and conducted on a server with 8 NVIDIA GeForce RTX 3090 GPUs (24GB memory each). While PyTorch is mentioned, a specific version number is not provided, nor are other key software components with versions. |
| Experiment Setup | Yes | In the proposed method, all parameters were optimized by the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate. In all experiments, we repeated each experiment five times for all methods and report the average results. |
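The training loop quoted in Algorithm 1 above can be sketched in plain Python. This is a minimal illustration of the loop's structure only: every function below is a hypothetical placeholder standing in for Eq. (7)–(15) of the paper (whose actual definitions are not in this excerpt), not the authors' implementation, and the dummy arithmetic merely makes the sketch runnable.

```python
# Hypothetical stand-ins for Eq. (7)-(15); the real method computes graph
# structures and adapter representations, not these toy transformations.
def homogeneous_branch(features):            # Eq. (7): structure A and representations Z
    return [x * 0.5 for x in features]

def heterogeneous_branch(features):          # Eq. (9)-(10): structure S and representations Z-hat
    return [x * 0.8 for x in features]

def label_propagated_contrastive(z, z_hat):  # Eq. (11)-(12): label propagation + contrastive loss
    return sum((a - b) ** 2 for a, b in zip(z, z_hat))

def feature_reconstruction(z, features):     # Eq. (13): reconstruction loss
    return sum(abs(a - b) for a, b in zip(z, features))

def margin_loss(z_hat):                      # Eq. (14): margin loss
    return max(0.0, 1.0 - sum(z_hat) / len(z_hat))

def train(features, max_steps=3):
    """Mirror Algorithm 1: loop until the maximum training steps E is reached."""
    history = []
    for _ in range(max_steps):               # line 2: "while not reaching E do"
        z = homogeneous_branch(features)     # lines 3-4
        z_hat = heterogeneous_branch(features)  # lines 5-6
        loss = (label_propagated_contrastive(z, z_hat)  # line 11: J sums the loss terms (Eq. 15)
                + feature_reconstruction(z, features)
                + margin_loss(z_hat))
        history.append(loss)                 # line 12: back-propagation would update the adapters here
    return history

losses = train([1.0, 2.0, 3.0])
```

Because only adapter and projection parameters are listed as outputs, an actual implementation would freeze the pre-trained HGNN weights and pass only the adapter parameters to the optimizer.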