Structure Is All You Need: Structural Representation Learning on Hyper-Relational Knowledge Graphs

Authors: Jaejun Lee, Joyce Jiyoung Whang

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results show that MAYPL outperforms 40 different methods on 10 benchmark datasets, where each benchmark is introduced to test a different perspective of a method.
Researcher Affiliation Academia School of Computing, KAIST, Daejeon, South Korea. Correspondence to: Joyce Jiyoung Whang <EMAIL>.
Pseudocode No The paper describes the methodology using mathematical formulations and textual descriptions in Section 4 (Structural Representation Learning on Hyper-Relational Knowledge Graphs) but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code Yes Our codes are available at https://github.com/bdi-lab/MAYPL/.
Open Datasets Yes We use diverse baseline methods presented in Appendix A and different benchmark datasets detailed in Appendix B. All results of baselines are obtained from the baselines' original papers or from the papers that introduced the datasets (Galkin et al., 2020; Guan et al., 2019; Ali et al., 2021; Yadati, 2020; Lee et al., 2023b). Table 11 shows the statistics of datasets for transductive link prediction on HKGs: WD50K (Galkin et al., 2020), WikiPeople (Wang et al., 2021), and WikiPeople (Guan et al., 2019).
Dataset Splits Yes Definition 3.2 (Transductive Link Prediction on HKGs). Given an HKG, G = (V, R, H), H is decomposed into three pairwise disjoint sets such that H = H_tr ∪ H_val ∪ H_tst, where H_tr is a training set, H_val is a validation set, and H_tst is a test set. Table 11 shows the statistics of datasets for transductive link prediction on HKGs: WD50K (Galkin et al., 2020), WikiPeople (Wang et al., 2021), and WikiPeople (Guan et al., 2019), where |H_tr|, |H_val|, |H_tst| are specified for each dataset.
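The decomposition in Definition 3.2 can be sketched as a random split of a fact set into pairwise disjoint H_tr, H_val, H_tst. This is a minimal illustration only; the split ratios, the fact tuple layout, and the function name are assumptions, not the paper's actual split procedure.

```python
import random

def split_facts(facts, ratios=(0.8, 0.1, 0.1), seed=0):
    """Decompose a list of hyper-relational facts into pairwise
    disjoint train/validation/test sets: H = H_tr u H_val u H_tst.
    Ratios are illustrative, not the benchmarks' actual proportions."""
    rng = random.Random(seed)
    shuffled = facts[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_tr = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    h_tr = shuffled[:n_tr]
    h_val = shuffled[n_tr:n_tr + n_val]
    h_tst = shuffled[n_tr + n_val:]
    return h_tr, h_val, h_tst

# A hyper-relational fact: a (head, relation, tail) triple plus
# qualifier relation-value pairs (hypothetical toy data).
facts = [("h%d" % i, "r", "t%d" % i, (("q", "v"),)) for i in range(10)]
h_tr, h_val, h_tst = split_facts(facts)
```

Since the three lists are slices of one shuffled copy, they are disjoint by construction and their union recovers H.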
Hardware Specification Yes We ran MAYPL on NVIDIA RTX A6000 with d = 256. We ran MAYPL on NVIDIA RTX 2080 Ti with d = 32. For MFB-IND, we ran MAYPL on NVIDIA RTX A6000 with d = 128.
Software Dependencies Yes We use Python 3.9, and PyTorch 2.0.1 with CUDA version 11.7.
Experiment Setup Yes In our implementation of MAYPL, we use the Adam optimizer (Kingma & Ba, 2015), the PReLU (He et al., 2015) activation function, dropout (Srivastava et al., 2014), residual connections (He et al., 2016), label smoothing (Szegedy et al., 2016), and layer normalization (Ba et al., 2016). In the attentive neural message passing, we use multi-heads (Vaswani et al., 2017; Brody et al., 2022). Table 15 shows the best hyperparameters, runtime, and memory usage of MAYPL for WD50K, WikiPeople, and WikiPeople. The table includes values for ϵ, e_best, lr, rL, L, n_head, n_batch, δ_tr, δ_drop.
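Of the components listed above, label smoothing (Szegedy et al., 2016) is simple enough to show concretely: the hard one-hot target is mixed with a uniform distribution over all K classes. A minimal stdlib sketch, not taken from the paper's code; the function name and eps value are assumptions.

```python
def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: the true class keeps (1 - eps) of the mass,
    and eps is spread uniformly over all K classes, so the result
    still sums to 1."""
    k = len(one_hot)
    return [(1.0 - eps) * y + eps / k for y in one_hot]

target = [0.0, 1.0, 0.0, 0.0]
smoothed = smooth_labels(target, eps=0.1)
# true class: (1 - 0.1) * 1 + 0.1 / 4 = 0.925; each other class: 0.025
```

Smoothed targets penalize over-confident predictions, which is why they are commonly paired with cross-entropy losses such as the one MAYPL's setup cites.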