Learning Concept Prerequisite Relation via Global Knowledge Relation Optimization

Authors: Miao Zhang, Jiawei Wang, Kui Xiao, Shihui Wang, Yan Zhang, Hao Chen, Zhifei Li

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on public datasets demonstrate the effectiveness of our GKROM, achieving state-of-the-art performance in concept prerequisite relation learning.
Researcher Affiliation | Academia | (1) School of Computer Science and Information Engineering, Hubei University, China; (2) Hubei Key Laboratory of Big Data Intelligent Analysis and Application, China; (3) Key Laboratory of Intelligent Sensing System and Security, Ministry of Education, China.
Pseudocode | No | The paper describes its methods with mathematical equations and structured text, but no section or figure labeled "Pseudocode" or "Algorithm" is present.
Open Source Code | Yes | Code: https://github.com/wisejw/GKROM
Open Datasets | Yes | University Course (Liang et al. 2017): compiles information about computer science courses at U.S. universities (https://github.com/suderoy/PREREQ-IAAI-19). LectureBank (Li et al. 2019): includes five domains, among them natural language processing and machine learning (https://github.com/Yale-LILY/LectureBank). MOOC (Liang et al. 2017): focuses on online open courses and includes 406 concepts and video texts from 382 computer science courses (https://github.com/suderoy/PREREQ-IAAI-19).
Dataset Splits | Yes | To ensure fair and accurate model evaluation, the dataset is divided into training, validation, and test sets at an 8:1:1 ratio.
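The 8:1:1 protocol above can be sketched as a seeded shuffle-and-slice; the function name `split_811` and the use of `random.shuffle` are assumptions for illustration, not details from the paper (which only states the ratio and a random seed of 25).

```python
import random

def split_811(examples, seed=25):
    """Split examples into train/val/test at an 8:1:1 ratio.

    A minimal sketch of the stated protocol; the paper reports only the
    ratio and the random seed, so the shuffling strategy is assumed.
    """
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    rng.shuffle(idx)  # deterministic given the seed
    n_train = int(0.8 * len(examples))
    n_val = int(0.1 * len(examples))
    train = [examples[i] for i in idx[:n_train]]
    val = [examples[i] for i in idx[n_train:n_train + n_val]]
    test = [examples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```

Fixing the seed makes the partition reproducible across runs, which is the point of reporting it alongside the split ratio.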
Hardware Specification | Yes | Experiments are performed on a server with one RTX 4090D 24GB GPU.
Software Dependencies | No | All models are implemented in PyTorch, and the graph neural network is built with the PyTorch Geometric library. The paper names these tools but does not provide specific version numbers.
Experiment Setup | Yes | The random seed, learning rate, weight decay, and batch size are set to 25, 10^-6, 10^-5, and 4, respectively. The feature dimensions of the initial embeddings of both concepts and documents are 300. The first layer of the graph neural network contains 128 hidden units, and the second layer contains 512 hidden units. The loss weights α and β in the global loss function are set to 0.5 and 0.5, respectively.
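The reported hyperparameters can be collected into a single configuration, with the two-term global loss combined as a weighted sum. This is a sketch under stated assumptions: the dictionary keys, the names `loss_a`/`loss_b`, and the exact form L = α·L_a + β·L_b are illustrative conventions, since the paper only reports the values of α and β.

```python
# Hyperparameters as reported in the paper; key names are assumptions.
CONFIG = {
    "seed": 25,
    "learning_rate": 1e-6,
    "weight_decay": 1e-5,
    "batch_size": 4,
    "embedding_dim": 300,  # initial concept/document embeddings
    "gnn_hidden_1": 128,   # first GNN layer hidden units
    "gnn_hidden_2": 512,   # second GNN layer hidden units
    "alpha": 0.5,          # loss weight α
    "beta": 0.5,           # loss weight β
}

def global_loss(loss_a, loss_b,
                alpha=CONFIG["alpha"], beta=CONFIG["beta"]):
    """Weighted combination of two loss terms with the reported α and β."""
    return alpha * loss_a + beta * loss_b
```

With α = β = 0.5 the combination is simply the average of the two loss terms, so neither objective dominates training.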