DGCPL: Dual Graph Distillation for Concept Prerequisite Relation Learning

Authors: Miao Zhang, Jiawei Wang, Jinying Han, Kui Xiao, Zhifei Li, Yan Zhang, Hao Chen, Shihui Wang

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental On three public benchmark datasets, we compare DGCPL with eight graph-based baseline methods and five traditional classification baseline methods. The experimental results show that DGCPL achieves state-of-the-art performance in learning concept prerequisite relations. Our code is available at https://github.com/wisejw/DGCPL.
Researcher Affiliation Academia 1School of Computer Science, Hubei University, China; 2Hubei Key Laboratory of Big Data Intelligent Analysis and Application, China; 3Key Laboratory of Intelligent Sensing System and Security, Ministry of Education, China
Pseudocode No The paper describes the methodology in prose and mathematical equations but does not include explicit pseudocode blocks or algorithm listings.
Open Source Code Yes Our code is available at https://github.com/wisejw/DGCPL.
Open Datasets Yes To evaluate the effectiveness of our proposed model, we select three public benchmark datasets. University Course Dataset (UCD)1: This dataset compiles course information [Liang et al., 2017] from the computer science domain across 11 universities in the United States, covering various topics such as algorithm design, computer graphics, and neural networks. LectureBank2: This dataset [Li et al., 2019] originates from online education platforms, covering five domains: natural language processing, machine learning, artificial intelligence, deep learning, and information retrieval. MOOC3: This dataset [Li et al., 2017] is derived from video playlists in the MOOC corpus and includes the subtitle texts of videos from 38 playlists in the computer science department. 1https://github.com/suderoy/PREREQ-IAAI-19 2https://github.com/Yale-LILY/LectureBank 3https://github.com/suderoy/PREREQ-IAAI-19
Dataset Splits Yes We split each dataset into training, validation, and test sets with a ratio of 8:1:1.
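The reported 8:1:1 split can be sketched as follows; this is a minimal illustration, not the authors' actual preprocessing code, and the function name and fixed seed are our assumptions.

```python
import random

def split_8_1_1(examples, seed=42):
    """Shuffle and split examples into train/val/test at an 8:1:1 ratio,
    matching the split ratio reported for the three benchmark datasets."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)  # 80% training
    n_val = int(n * 0.1)    # 10% validation; remainder goes to test
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_8_1_1(range(1000))
```

For 1,000 concept pairs this yields 800/100/100 examples; any rounding remainder falls into the test set.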
Hardware Specification Yes All experiments are implemented on a Linux server with one RTX 4090D GPU using the PyTorch framework.
Software Dependencies No The paper mentions the "PyTorch framework" and "BERT [Devlin et al., 2019]" but does not provide specific version numbers for these or other software dependencies.
Experiment Setup Yes Our proposed model is trained using the Adam optimizer for a total of 50 epochs. The learning rate lr, batch size, and classification prediction threshold γ are set to 1E-4, 16, and 0.50. For the UCD, LectureBank, and MOOC datasets, the number of graph neural network layers ℓ is set to 3, 2, and 2; the weight decay is set to 1E-4, 1E-2, and 1E-3; and the distillation loss weight λ in the overall loss function is set to 1E-6, 1E-1, and 1E-5, respectively.
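The per-dataset hyperparameters above can be collected into a single configuration table; this is a hypothetical reconstruction for reproduction purposes (the key names and the `config_for` helper are our assumptions, not identifiers from the authors' repository).

```python
# Shared settings reported for all three datasets.
COMMON = {
    "optimizer": "Adam",
    "epochs": 50,
    "lr": 1e-4,          # learning rate
    "batch_size": 16,
    "threshold": 0.50,   # classification prediction threshold gamma
}

# Dataset-specific settings: GNN depth, weight decay, distillation weight lambda.
PER_DATASET = {
    "UCD":         {"num_gnn_layers": 3, "weight_decay": 1e-4, "lambda_distill": 1e-6},
    "LectureBank": {"num_gnn_layers": 2, "weight_decay": 1e-2, "lambda_distill": 1e-1},
    "MOOC":        {"num_gnn_layers": 2, "weight_decay": 1e-3, "lambda_distill": 1e-5},
}

def config_for(dataset):
    """Merge the shared settings with the dataset-specific ones."""
    cfg = dict(COMMON)
    cfg.update(PER_DATASET[dataset])
    return cfg
```

A config merged this way can be passed directly to an optimizer constructor, e.g. `torch.optim.Adam(model.parameters(), lr=cfg["lr"], weight_decay=cfg["weight_decay"])`.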