HiTuner: Hierarchical Semantic Fusion Model Fine-Tuning on Text-Attributed Graphs
Authors: Zihan Fang, Zhiling Cai, Yuxuan Zheng, Shide Du, Yanchao Tan, Shiping Wang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results across benchmark datasets spanning various domains validate the effectiveness of the proposed framework. Our codes are available at: https://github.com/ZihanFang11/HiTuner |
| Researcher Affiliation | Academia | 1 College of Computer and Data Science, Fuzhou University, Fuzhou, China; 2 Key Laboratory of Intelligent Metro, Fujian Province University, Fuzhou, China; 3 College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou, China (author email addresses redacted) |
| Pseudocode | No | The paper describes the method using mathematical formulations (e.g., Eq 1-15) and diagrams (Figure 1, Figure 2), but does not contain a dedicated pseudocode or algorithm block. |
| Open Source Code | Yes | Our codes are available at: https://github.com/ZihanFang11/HiTuner |
| Open Datasets | Yes | Datasets: We evaluate the proposed method on various types of datasets, with the relevant statistics summarized in Table 1. These datasets encompass three citation networks (CiteSeer, Cora and PubMed), along with a social network dataset (Instagram), an E-commerce dataset from Amazon (Electronics-Photography, namely Photo) and a Wikipedia-based dataset (WikiCS). |
| Dataset Splits | Yes | Specifically, the ratio of nodes used for the train/valid/test stage is 10%/10%/80%. |
| Hardware Specification | Yes | All experiments are conducted on NVIDIA A100 GPUs with 80GB memory. |
| Software Dependencies | No | By default, we employ LLaMA2-7B [Touvron et al., 2023] and BERT as the backbones, and SAGE as an instance of GNNs. For a fair comparison, we run 5 times and report the mean result and the standard deviation. ... We select three popular LMs: BERT (bert-base-uncased) [Devlin et al., 2019], DeBERTa (deberta-base) and SentenceBERT (bert-base-nli-mean-tokens) [Reimers and Gurevych, 2019]. The paper mentions specific models and their versions/sources, but does not specify software library versions (e.g., PyTorch, TensorFlow, Hugging Face Transformers library versions). |
| Experiment Setup | Yes | We test 3-layer architectures with a hidden dimension of 256, and each layer is accompanied by a batch operation. ... The trade-off parameter λ is chosen from 0.1 to 0.9 with step size 0.1, and the number of layers M varies from 2 to 8. |
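The reported protocol (a 10%/10%/80% train/valid/test node split, a λ grid from 0.1 to 0.9 in steps of 0.1, and layer counts M from 2 to 8) can be sketched as follows. This is a minimal illustration of the stated settings, not code from the paper's repository; the function and variable names are our own.

```python
import numpy as np

def split_nodes(num_nodes, train_frac=0.10, valid_frac=0.10, seed=0):
    """Shuffle node indices and split them 10%/10%/80% into
    train/valid/test, as reported in the paper."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_nodes)
    n_train = int(num_nodes * train_frac)
    n_valid = int(num_nodes * valid_frac)
    return (idx[:n_train],
            idx[n_train:n_train + n_valid],
            idx[n_train + n_valid:])

# Hyperparameter search grids stated in the paper:
# trade-off lambda in {0.1, ..., 0.9}, number of layers M in {2, ..., 8}.
lambda_grid = [round(0.1 * k, 1) for k in range(1, 10)]
layer_grid = list(range(2, 9))

# Example: Cora has 2708 nodes in its standard form.
train_idx, valid_idx, test_idx = split_nodes(2708)
```

Seeding the permutation makes the split reproducible across the 5 repeated runs the paper reports; the actual repository may fix splits differently.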