HeTa: Relation-wise Heterogeneous Graph Foundation Attack Model

Authors: Yuling Wang, Zihui Chen, Pengfei Jiao, Xiao Wang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments exhibit powerful attack performances and generalizability of our method. Extensive experiments on three public datasets demonstrate the effectiveness and generalizability of our proposed HeTa under node injection and evasion attacks.
Researcher Affiliation | Academia | Yuling Wang (1,2), Zihui Chen (1,2), Pengfei Jiao (1,2) and Xiao Wang (3); (1) School of Cyberspace, Hangzhou Dianzi University; (2) Data Security Governance Zhejiang Engineering Research Center, Hangzhou Dianzi University; (3) Beihang University
Pseudocode | Yes | The implementation details of the experiments are provided in Appendix B.3, and the pseudo-code for HeTa can be found in Appendix C.
Open Source Code | No | The paper references third-party GitHub repositories for datasets/models (e.g., "https://github.com/THUDM/HGB" and "https://github.com/seongjunyun/Graph_Transformer_Networks") but provides no statement of, or link to, the source code for its own methodology.
Open Datasets | Yes | In our experimental evaluation, we utilize three HG datasets: DBLP [1], ACM [1], and IMDB [2]. Details of these datasets are displayed in Appendix B.1. [1] https://github.com/THUDM/HGB [2] https://github.com/seongjunyun/Graph_Transformer_Networks
Dataset Splits | No | The paper discusses node injection rates (e.g., 1%, 2%, 5%) for attacks and sampling subsets of training data, but it does not provide specific training, validation, and test dataset splits (e.g., percentages or counts) for the primary datasets used in the experiments.
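To make the reported injection rates concrete, the sketch below converts a percentage rate into an absolute budget of injected nodes. The helper name and the example node count are illustrative assumptions, not taken from the paper or its code.

```python
# Hypothetical helper (not from the paper): turn the node-injection rates
# quoted in the evaluation (1%, 2%, 5%) into absolute injection budgets
# for a graph with n_nodes target nodes, injecting at least one node.
def injection_budgets(n_nodes, rates=(0.01, 0.02, 0.05)):
    return {rate: max(1, int(n_nodes * rate)) for rate in rates}

# Usage with an illustrative node count:
budgets = injection_budgets(4057)
```

The `max(1, ...)` floor is a design choice so that very small graphs still receive a non-empty attack budget; the paper does not specify how fractional budgets are rounded.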
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | The implementation details of the experiments are provided in Appendix B.3. Appendix B.3 states: "Parameters: The parameters of HeTa are configured as follows: learning rate is 0.001, weight decay is 0.0001, hidden size is 64, dropout rate is 0.5, and the number of layers is 2. The optimization method is Adam. The parameter K is 5, the penalty term β is 1.8, and the attack steps M are 3."
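The hyperparameters reported in Appendix B.3 can be collected into a single config mapping, which is how one would typically wire them into a reimplementation. The key names below are my own; the paper does not release code, so this is a sketch of the reported values, not the authors' configuration file.

```python
# Sketch of the HeTa hyperparameters reported in Appendix B.3.
# Key names are assumptions; only the values come from the paper.
HETA_CONFIG = {
    "learning_rate": 1e-3,   # Adam learning rate
    "weight_decay": 1e-4,
    "hidden_size": 64,
    "dropout": 0.5,
    "num_layers": 2,
    "optimizer": "Adam",
    "K": 5,                  # the parameter K
    "beta": 1.8,             # the penalty term β
    "attack_steps": 3,       # the attack steps M
}
```

Keeping all reported values in one dict like this makes it straightforward to check a reimplementation against the paper's stated setup.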