S²DN: Learning to Denoise Unconvincing Knowledge for Inductive Knowledge Graph Completion
Authors: Tengfei Ma, Yujie Chen, Liang Wang, Xuan Lin, Bosheng Song, Xiangxiang Zeng
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on three benchmark KGs demonstrate that S2DN surpasses the performance of state-of-the-art models. These results demonstrate the effectiveness of S2DN in preserving semantic consistency and enhancing the robustness of filtering out unreliable interactions in contaminated KGs. … We carefully consider the following key research questions: RQ1) Does S2DN outperform other state-of-the-art inductive KGC baselines? RQ2) Are the proposed Semantic Smoothing and Structure Refining modules effective? RQ3) Can S2DN enhance the semantic consistency of the relations and refine reliable substructure surrounding the target facts? … We utilize three widely-used datasets: WN18RR (Dettmers et al. 2018), FB15k-237 (Toutanova et al. 2015), and NELL-995 (Xiong, Hoang, and Wang 2017), to evaluate the performance of S2DN and baseline models. |
| Researcher Affiliation | Academia | College of Computer Science and Electronic Engineering, Hunan University, China; NLPR, MAIS, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences; College of Computer Science, Xiangtan University, China |
| Pseudocode | No | The paper describes its methodology through equations (e.g., Eq 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12) and textual descriptions of modules like 'Semantic Smoothing' and 'Structure Refining', but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/xiaomingaaa/SDN |
| Open Datasets | Yes | We utilize three widely-used datasets: WN18RR (Dettmers et al. 2018), FB15k-237 (Toutanova et al. 2015), and NELL-995 (Xiong, Hoang, and Wang 2017), to evaluate the performance of S2DN and baseline models. |
| Dataset Splits | Yes | Following (Teru, Denis, and Hamilton 2020; Zhang et al. 2023b), we use the same four subsets with increasing size of the three datasets. Each subset comprises distinct training and test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not mention specific software dependencies or their version numbers (e.g., libraries, frameworks, or programming language versions) used for the implementation. |
| Experiment Setup | No | The paper discusses the general framework and modules of S2DN, but it does not explicitly provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations within the main text. |