Link Prediction with Relational Hypergraphs

Authors: Xingyue Huang, Miguel Romero Orth, Pablo Barceló, Michael M. Bronstein, İsmail İlkan Ceylan

TMLR 2025

Reproducibility Assessment (Variable / Result / LLM Response)
Research Type: Experimental
LLM Response: "Empirically, we validate the power of the proposed architectures on various relational hypergraph benchmarks. The resulting model architectures substantially outperform every baseline for inductive link prediction and also lead to competitive results for transductive link prediction. ... We present a detailed empirical analysis to validate our theoretical findings (Section 6). Experiments for inductive and transductive link prediction with relational hypergraphs show that a simple HC-MPNN architecture surpasses all existing baselines, leading to competitive results. Our ablation studies on different model components justify the importance of our model design choices."
Researcher Affiliation: Academia
LLM Response: "Xingyue Huang (EMAIL), Department of Computer Science, University of Oxford, UK; Miguel Romero (EMAIL), Department of Computer Science, Universidad Católica de Chile, Chile; Pablo Barceló (EMAIL), Institute for Mathematics and Comp. Engineering, Universidad Católica de Chile & IMFD, Chile; Michael M. Bronstein (EMAIL), Department of Computer Science, University of Oxford, UK; İsmail İlkan Ceylan (EMAIL), Department of Computer Science, University of Oxford, UK"
Pseudocode: No
LLM Response: The paper describes methods using mathematical formulas and text, but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks, nor structured steps formatted like code.
Open Source Code: Yes
LLM Response: "The code for experiments is provided in https://github.com/HxyScotthuang/HC-MPNN."
Open Datasets: Yes
LLM Response: "Yadati (2020) constructed three inductive datasets, WP-IND, JF-IND, and MFB-IND, from existing transductive datasets on relational hypergraphs: WikiPeople (Guan et al., 2019), JF17K (Wen et al., 2016), and M-FB15K (Fatemi et al., 2020). ... We evaluate HCNets on the link prediction task with relational hypergraphs, namely the publicly available FB-AUTO, M-FB15K (Fatemi et al., 2020), WikiPeople (Guan et al., 2021), and JF17K (Wen et al., 2016)."
Dataset Splits: Yes
LLM Response: "We then randomly pick 70% of the generated graphs as the training set and the remaining 30% as the testing set. (See details in Appendix J.)" Inductive dataset statistics:

Dataset   # seen vertices  # train hyperedges  # unseen vertices  # relations  # features  # max arity
WP-IND    4,463            4,139               100                32           37          4
JF-IND    4,685            6,167               100                31           46          4
MFB-IND   3,283            336,733             500                12           25          3

Transductive split sizes (four datasets, column order as in the paper):

#train    6,778    305,725    61,104    415,375
#valid    2,255    38,223     15,275    39,348
#test     2,180    38,281     24,915    38,797
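The quoted 70/30 random split can be sketched as follows. This is a minimal illustration assuming the generated graphs are held in a plain Python list; the function name and fixed seed are illustrative, not taken from the authors' repository:

```python
import random

def train_test_split(graphs, train_frac=0.7, seed=0):
    """Randomly partition a list of graphs into train/test sets."""
    rng = random.Random(seed)            # fixed seed for reproducibility
    indices = list(range(len(graphs)))
    rng.shuffle(indices)
    cut = int(train_frac * len(graphs))  # e.g. 70% of the graphs
    train = [graphs[i] for i in indices[:cut]]
    test = [graphs[i] for i in indices[cut:]]
    return train, test

# Example: split 10 placeholder "graphs" 70/30.
train, test = train_test_split(list(range(10)))
```

With 10 inputs and train_frac=0.7, the split yields 7 training and 3 testing graphs, and every graph lands in exactly one side.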
Hardware Specification Yes We ran all experiments on a single NVIDIA V100 GPU.
Software Dependencies: No
LLM Response: The paper mentions a "custom Triton kernel" and "PyTorch Geometric" but does not provide specific version numbers for these software components.
Experiment Setup: Yes
LLM Response: "In all experiments, we consider a 2-layer MLP as the decoder and adopt layer normalization and dropout in all layers before applying ReLU activation and skip-connection. During training, we remove edges that are currently being treated as positive tuples to prevent overfitting for each batch. We choose the best checkpoint based on its evaluation on the validation set." Tables 10, 11, and 18 provide detailed hyperparameters such as GNN layer depth, hidden dimension, decoder layer depth, optimizer (Adam), learning rate, batch size, number of negative samples, epochs, adversarial temperature, dropout, and accumulation iteration.