HePa: Heterogeneous Graph Prompting for All-Level Classification Tasks
Authors: Jia Jinghong, Lei Song, Jiaxing Li, Youyong Kong
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conducted a comprehensive experimental analysis of HePa on three benchmark datasets. |
| Researcher Affiliation | Academia | 1Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, School of Computer Science and Engineering, Southeast University 2Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China |
| Pseudocode | No | The paper describes methods using mathematical equations and textual descriptions, but does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific statements regarding the release of source code or links to a code repository. |
| Open Datasets | Yes | We conduct experiments on three benchmark datasets (Fu et al. 2020). IMDB (Internet Movie Database) dataset... DBLP (Digital Bibliography & Library Project) dataset... ACM (Association for Computing Machinery) dataset... |
| Dataset Splits | No | The paper mentions Dtrain, Dval, and Dtest, and describes the k-shot sampling for evaluation tasks (e.g., 'randomly select 200 5-shot NC task data'). However, it does not provide specific percentages or counts for the overall training, validation, and test splits of the IMDB, DBLP, and ACM benchmark datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions using R-GCN as the backbone model, but does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the implementation. |
| Experiment Setup | No | The paper mentions the evaluation metric (Accuracy) and general data preparation for tasks. However, it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or other detailed training configurations in the main text. |