FS-KEN: Few-shot Knowledge Graph Reasoning by Adversarial Negative Enhancing

Authors: Lingyuan Meng, Ke Liang, Zeyu Zhu, Xinwang Liu, Wenpeng Lu

IJCAI 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Comprehensive experiments conducted on two few-shot knowledge graph completion datasets reveal that FS-KEN surpasses other baseline models, achieving state-of-the-art results." [...] (Section 4: Experiments and Discussion) |
| Researcher Affiliation | Academia | "1 National University of Defense Technology; 2 Shandong Computer Science Center (National Supercomputer Center in Jinan)" (author emails redacted) |
| Pseudocode | No | The paper describes the methodology and framework with textual explanations and an illustration (Figure 2), but provides no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper contains no explicit statement about releasing source code and no link to a code repository for the described methodology. |
| Open Datasets | Yes | "We assessed the FS-KEN on two real-world few-shot datasets, i.e., NELL-One [Mitchell et al., 2018] and FB15K-237 [Bollacker et al., 2008]." |
| Dataset Splits | Yes | "For the NELL-One, the meta-evaluation and meta-test splits provided in the dataset were used for evaluating and testing few-shot tasks. [...] In the case of FB15K-237 [Bollacker et al., 2008], a minority ratio of 7:30 was selected for the target few-shot evaluation and test tasks. [...] Moreover, each test triplet is compared with 50 possible negative triplets." |
| Hardware Specification | Yes | "The FS-KEN experiments were primarily executed using the PyTorch [Paszke et al., 2019] library and were performed on a single NVIDIA GeForce 3090Ti." |
| Software Dependencies | No | The experiments use the PyTorch [Paszke et al., 2019] library, but no version numbers for PyTorch or any other libraries are given. |
| Experiment Setup | Yes | "Furthermore, the few-shot instance count K was configured to 3. [...] For the step of generating closed subgraphs, we generate 2-hop subgraphs in NELL-One and 1-hop subgraphs in FB15K-237. We employed AdamW with a learning rate of 1e-5. Moreover, the training epochs of model were set to 5000, and the training batch size was set to 8. [...] The experimental results show that when λ1 and λ2 both take the value of 0.1, the model achieves the best performance on the NELL-One dataset. For the FB15k-237 dataset, our model achieves best performance when λ1 = 1 and λ2 = 0.1." |
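The hyperparameters reported under Experiment Setup can be collected into a single configuration sketch. This is a hypothetical reconstruction for reference only (the paper releases no code, so all key names below are illustrative); the values themselves are the ones quoted above.

```python
# Hypothetical reconstruction of the FS-KEN training configuration.
# Key names are illustrative; numeric values are those reported in the paper.
FS_KEN_CONFIG = {
    "few_shot_K": 3,                                  # few-shot instance count K
    "subgraph_hops": {"NELL-One": 2, "FB15K-237": 1}, # closed-subgraph extraction depth
    "optimizer": "AdamW",
    "learning_rate": 1e-5,
    "epochs": 5000,
    "batch_size": 8,
    # Best-performing loss weights per dataset (from the ablation discussion)
    "lambda1": {"NELL-One": 0.1, "FB15K-237": 1.0},
    "lambda2": {"NELL-One": 0.1, "FB15K-237": 0.1},
}
```

A dictionary like this makes it easy to see at a glance which settings are shared across datasets (optimizer, learning rate, epochs, batch size) and which differ (subgraph hops, λ1).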
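The Dataset Splits row notes that each test triplet is ranked against 50 negative triplets. A minimal sketch of that ranking protocol, with random stand-in scores in place of the model's actual scoring function (which the paper does not release), looks like this:

```python
import random

def rank_against_negatives(pos_score, neg_scores):
    """Rank of the positive triplet among itself plus its negatives
    (1 = best; ties are broken pessimistically against the positive)."""
    return 1 + sum(1 for s in neg_scores if s >= pos_score)

# Toy illustration of the protocol: 100 hypothetical test triplets,
# each compared with 50 negatives, as in the paper's evaluation setup.
random.seed(0)
ranks = []
for _ in range(100):
    pos = random.random()
    negs = [random.random() for _ in range(50)]
    ranks.append(rank_against_negatives(pos, negs))

# Standard ranking metrics computed from the per-triplet ranks
mrr = sum(1.0 / r for r in ranks) / len(ranks)
hits_at_10 = sum(r <= 10 for r in ranks) / len(ranks)
```

With only 50 negatives per triplet (rather than the full entity set), ranks are bounded by 51, which makes metrics such as Hits@10 noticeably easier than under full-candidate ranking; this matters when comparing numbers across papers.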