Contrastive Auxiliary Learning with Structure Transformation for Heterogeneous Graphs
Authors: Wei Du, Hongmin Sun, Hang Gao, Gaoyang Li, Ying Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and analyses on five benchmark datasets without node features and three benchmark datasets with node features validate the effectiveness and efficiency of our novel method compared with several state-of-the-art methods. |
| Researcher Affiliation | Academia | 1 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, College of Computer Science and Technology, Jilin University, China; 2 School of Life Sciences and Technology, Tongji University, China |
| Pseudocode | No | The paper describes the model architecture and learning objectives using mathematical formulations and descriptive text, but it does not include any explicit pseudocode blocks or algorithms labeled as such. |
| Open Source Code | Yes | Code https://github.com/mlcb-jlu/CALHG |
| Open Datasets | Yes | Experiments are conducted on five widely used heterogeneous graphs: ACM, DBLP, and IMDB, which contain node features, from HGB (Lv et al. 2021), and Freebase and AMiner, which do not contain node features, from HINormer (Mao et al. 2023). |
| Dataset Splits | Yes | The datasets are split into training and test sets, with 80% of the labeled nodes used for training and the remaining 20% for testing. |
| Hardware Specification | Yes | All experiments are conducted on a single GTX 4090 GPU. |
| Software Dependencies | No | The paper does not explicitly state specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | The paper discusses the hyperparameters λ1 and λ2 and their sensitivity, stating, "We fix one of these hyperparameters and vary the other within the range of 0.1 to 1.0." However, it does not provide a comprehensive list of all specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer details) used for the final experiments. |