Cognitive Fluctuations Enhanced Attention Network for Knowledge Tracing
Authors: Mingliang Hou, Xueyi Li, Teng Guo, Zitao Liu, Mi Tian, Renqiang Luo, Weiqi Luo
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our contributions are validated through extensive experiments on three real-world datasets, demonstrating significant improvements in length generalization and prediction performance. |
| Researcher Affiliation | Collaboration | ¹Guangdong Institute of Smart Education, Jinan University, Guangdong, 510632, China; ²TAL Education Group, Beijing, 102206, China; ³School of Software Technology, Dalian University of Technology, Liaoning, 116622, China |
| Pseudocode | No | The paper describes the methodology using mathematical equations and textual explanations, but no explicit pseudocode blocks or algorithm listings are provided. |
| Open Source Code | Yes | Code: https://pykt.org/ |
| Open Datasets | Yes | We evaluate the effectiveness of FlucKT across three diverse real-world datasets, each representing different learning scenarios. Table 1 presents the statistics for all datasets. More detailed information on these three datasets can be found in Appendix A3.1. [Datasets: AL2005, BD2006, NIPS34] |
| Dataset Splits | No | The paper discusses evaluation across different 'Length of Interaction Sequences' (context window sizes such as 200, 400, 600, 800, and 1000), but does not provide specific percentages or counts for training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'pyKT (Liu et al. 2022b): A python library to benchmark deep learning-based knowledge tracing models' and states that 'Our study strictly adheres to pyKT (Liu et al. 2022b) evaluation protocols', implying the use of this library. However, specific version numbers for pyKT or any other software dependencies are not provided. |
| Experiment Setup | No | The paper states 'Our study strictly adheres to pyKT (Liu et al. 2022b) evaluation protocols and includes comprehensive hyperparameter tuning for all baselines,' and describes the prediction layer, but it does not explicitly provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific training configurations for FlucKT or any baseline. |
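The table above notes that the paper evaluates across interaction-sequence lengths of 200 to 1000 without specifying how sequences are partitioned. As an illustration only (this is not the paper's documented preprocessing, and `split_into_windows` is a hypothetical helper, not a pyKT API), a common way to produce fixed-length context windows from a learner's interaction history is:

```python
def split_into_windows(interactions, max_len):
    """Split one learner's interaction sequence into non-overlapping
    windows of at most `max_len` interactions each."""
    return [interactions[i:i + max_len]
            for i in range(0, len(interactions), max_len)]

# Example: a 1000-interaction sequence evaluated at the context
# sizes listed in the report (200, 400, 600, 800, 1000).
seq = list(range(1000))
for max_len in (200, 400, 600, 800, 1000):
    windows = split_into_windows(seq, max_len)
    assert all(len(w) <= max_len for w in windows)
    print(max_len, "->", len(windows), "windows")
```

Whether the authors use non-overlapping windows, truncation, or sliding windows is not stated in the paper, which is precisely why the Dataset Splits variable is marked "No".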