Heterogeneous Temporal Hypergraph Neural Network

Authors: Huan Liu, Pengfei Jiao, Mengzhou Gao, Chaochao Chen, Di Jin

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Detailed experimental results on three real-world HTG datasets verify the effectiveness of the proposed HTHGN for modeling high-order interactions in HTGs and demonstrate significant performance improvements. This section evaluates the proposed HTHGN and baselines on three real-world datasets: Yelp, DBLP, and AMiner. We conducted dynamic link prediction and new link prediction experiments to verify the gain of higher-order interactions on representation learning performance. Our comparative experimental results are summarized in Table 1. Ablation Study: To verify the effectiveness of each module, we performed ablation experiments and reported the results as shown in Figure 6 and Appendix E.4.
Researcher Affiliation | Academia | 1 School of Cyberspace, Hangzhou Dianzi University, Hangzhou, China; 2 Data Security Governance Zhejiang Engineering Research Center, Hangzhou, China; 3 College of Computer Science and Technology, Zhejiang University, Hangzhou, China; 4 College of Intelligence and Computing, Tianjin University, Tianjin, China. EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology using mathematical equations and prose, but does not include any structured pseudocode or algorithm blocks clearly labeled as such.
Open Source Code | No | The paper does not provide an explicit statement in the main text about releasing the code, nor a link to a repository for the HTHGN implementation. It mentions "More setup and implementation details see Appendix B." but does not confirm code availability within the main body.
Open Datasets | Yes | This section evaluates the proposed HTHGN and baselines on three real-world datasets: Yelp, DBLP, and AMiner. Detailed experimental results on three real-world HTG datasets verify the effectiveness of the proposed HTHGN for modeling high-order interactions in HTGs and demonstrate significant performance improvements.
Dataset Splits | Yes | We held out the last 3 snapshots for testing and trained the model on the remaining snapshots. Link prediction uses all edges in the (T+1)-th snapshot as positive edges, while new link prediction evaluates only edges that have not previously appeared.
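The split protocol quoted above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code (which is not released): function and variable names are invented, and each snapshot is modeled simply as a set of `(src, dst)` edges.

```python
# Illustrative sketch of the temporal evaluation protocol: train on the
# first T snapshots, hold out the last n_test for testing. "Link prediction"
# treats all edges of the first held-out snapshot as positives; "new link
# prediction" keeps only edges never seen in any training snapshot.

def temporal_split(snapshots, n_test=3):
    """Hold out the last n_test snapshots for testing."""
    return snapshots[:-n_test], snapshots[-n_test:]

def new_links(test_edges, train_snapshots):
    """Edges in the test snapshot that never appeared during training."""
    seen = set()
    for snap in train_snapshots:
        seen.update(snap)
    return sorted(e for e in test_edges if e not in seen)

# Toy example: five snapshots, each a set of (src, dst) edges.
snapshots = [
    {(0, 1), (1, 2)},   # t = 1
    {(0, 1), (2, 3)},   # t = 2
    {(1, 2), (3, 4)},   # t = 3 (held out)
    {(0, 1), (4, 5)},   # t = 4 (held out)
    {(2, 3), (5, 6)},   # t = 5 (held out)
]
train, test = temporal_split(snapshots, n_test=3)
positives = sorted(test[0])            # all edges of the first test snapshot
unseen = new_links(test[0], train)     # (1, 2) is filtered out: seen at t = 1
```

Under this toy setup, `unseen` contains only `(3, 4)`, matching the paper's distinction between evaluating all test-snapshot edges and only previously unobserved ones.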
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments in the main text.
Software Dependencies | No | The paper does not specify ancillary software dependencies, such as library names with version numbers, in the main text.
Experiment Setup | No | While the paper discusses the impact of hyperparameters such as the dimension (d) and number of layers (L) in Figures 3 and 4 as part of its 'Parameter Sensitive Analysis', it does not give fixed values for all hyperparameters (e.g., learning rate, batch size, optimizer settings) used for the main experimental results in the main text; it refers to Appendix B, indicating these are not fully specified in the main body.