Mixed-Curvature Multi-Modal Knowledge Graph Completion
Authors: Yuxiao Gao, Fuwei Zhang, Zhao Zhang, Xiaoshuang Min, Fuzhen Zhuang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three widely used benchmarks demonstrate the effectiveness of our method. |
| Researcher Affiliation | Collaboration | 1 Institute of Artificial Intelligence, Beihang University, Beijing, China; 2 Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; 3 The Sixth Research Institute of China Electronics Corporation, Beijing, China; 4 Zhongguancun Laboratory, Beijing, China |
| Pseudocode | No | The paper describes the methodology in narrative text and figures but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that its source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate our model on three publicly available multi-modal KGC benchmarks: MKG-W (Xu et al. 2022), MKG-Y (Xu et al. 2022), and DB15K (Liu et al. 2019). |
| Dataset Splits | Yes | Table 1 (dataset statistics): DB15K — 12,842 entities, 279 relations, 79,222 / 9,902 / 9,904 train/validation/test triples; MKG-W — 15,000 entities, 169 relations, 34,196 / 4,276 / 4,274 triples; MKG-Y — 15,000 entities, 28 relations, 21,310 / 2,665 / 2,663 triples. |
| Hardware Specification | Yes | All the experiments are conducted on a RTX 3090 GPU. |
| Software Dependencies | No | Our model is implemented using PyTorch (Paszke et al. 2019). While PyTorch is mentioned, a specific version number is not provided, and no other key software components are listed with version numbers. |
| Experiment Setup | Yes | The learning rate is chosen from {0.5, 0.1, 0.05, 0.01, 0.005}, and the regularization parameter is selected from {0.1, 0.05, 0.01}. The batch size is chosen from {256, 512, 1024}, and the dimension size is set at 256. We use Adagrad (Duchi, Hazan, and Singer 2011) as our optimizer and apply N3 regularization (Chami et al. 2020) to constrain the model parameters. |
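The reported setup (hyperparameter grids, Adagrad, N3 regularization) can be sketched in PyTorch for anyone attempting a reproduction. This is a minimal illustration, not the authors' code: the embedding tables, entity/relation counts, and grid-search loop are assumptions; only the grid values, dimension, optimizer, and N3 penalty come from the paper. N3 here follows the standard nuclear 3-norm definition (sum of cubed absolute embedding values), which is what the cited regularizer computes.

```python
import itertools
import torch

# Hyperparameter grids reported in the paper; embedding dimension is fixed at 256.
learning_rates = [0.5, 0.1, 0.05, 0.01, 0.005]
reg_weights = [0.1, 0.05, 0.01]
batch_sizes = [256, 512, 1024]
dim = 256


def n3_regularizer(factors, weight):
    """N3 (nuclear 3-norm) penalty: weight * sum of |x|^3 over all factors."""
    return weight * sum(f.abs().pow(3).sum() for f in factors)


# Hypothetical entity/relation counts for illustration only.
num_entities, num_relations = 1000, 50
entity_emb = torch.nn.Embedding(num_entities, dim)
relation_emb = torch.nn.Embedding(num_relations, dim)

for lr, reg, bs in itertools.product(learning_rates, reg_weights, batch_sizes):
    optimizer = torch.optim.Adagrad(
        list(entity_emb.parameters()) + list(relation_emb.parameters()), lr=lr
    )
    # Training loop (omitted): for each batch of size bs, add
    # n3_regularizer([head_emb, rel_emb, tail_emb], reg) to the task loss.
    break  # sketch only: stop after configuring the first combination
```

Validation-set MRR would then select the best (lr, reg, bs) combination, consistent with standard KGC tuning practice.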