Explicit and Implicit Examinee-Question Relation Exploiting for Efficient Computerized Adaptive Testing
Authors: Changqian Wang, Shangshang Yang, Siyu Song, Ziwen Wang, Haiping Ma, Xingyi Zhang, Bo Jin
AAAI 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness and efficiency of our framework through comprehensive experiments on real-world datasets. In this section, we conduct experiments on four real-world datasets to validate the effectiveness and efficiency of RECAT. We focus on the following research questions: (RQ1) Can RECAT outperform the performance achieved by solely utilizing either explicit or implicit relation in prediction task? (RQ2) How does RECAT perform in the simulation of ability estimation scenario? (RQ3) Can RECAT-based approaches reduce the latency of policy-based approaches? (RQ4) How adaptive is RECAT to new questions? |
| Researcher Affiliation | Academia | 1Institutes of Physical Science and Information Technology, Anhui University, China 2School of Artificial Intelligence, Anhui University, China 3 School of Computer Science and Technology, Anhui University, China 4Department of Information Materials and Intelligent Sensing Laboratory of Anhui Province, China 5School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, China 6Guangdong Medical University, Zhanjiang, China EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods using mathematical formulations and descriptive text, but no explicit pseudocode or algorithm blocks are provided. |
| Open Source Code | Yes | Our code is available at https://github.com/ChangqianWang/Intelligent-Education/tree/main/RECAT. |
| Open Datasets | Yes | We conducted experiments on four real-world datasets, namely ASSIST09, ASSIST12, NIPS-EDU, and Math. ASSIST09 (Feng, Heffernan, and Koedinger 2009) and ASSIST12 (Pardos et al. 2013) are publicly available datasets derived from the online tutoring system ASSISTments. NIPS-EDU (Wang et al. 2020b) is a dataset provided by the NeurIPS 2020 Education Challenge, which is collected from the educational platform Eedi. |
| Dataset Splits | No | The paper mentions training and testing phases with S = {Strain, Sval, Stest} and that preprocessing procedures from previous works were followed, but it does not explicitly provide percentages, sample counts, or specific methodology for how the datasets were split into training, validation, or test sets within the paper. |
| Hardware Specification | Yes | RECAT was implemented in PyTorch, and all experiments are conducted on an NVIDIA RTX 4090 GPU. |
| Software Dependencies | No | The paper states, "RECAT was implemented in Pytorch," but does not specify a version number for PyTorch or any other software dependencies. It mentions using the Adam optimizer and Xavier initialization but without associated versioned libraries. |
| Experiment Setup | Yes | During the training process, we initialized all the parameters in the CDM with Xavier (Glorot and Bengio 2010) initialization and used the Adam (Kingma and Ba 2014) optimizer with a fixed batch size of 256. We set the dimensions of latent features for both examinees and exercises to be equal to the number of knowledge concepts. During the testing process, we set the maximum test length T = 10. The number of question generators is set to N = 5. We set k = 1 as the number of real questions with the minimum KL divergence for each generated question. |
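The training configuration reported above (Xavier initialization, Adam optimizer, batch size 256, latent dimension tied to the number of knowledge concepts) can be sketched in PyTorch. Note this is a minimal illustrative stand-in, not the authors' RECAT architecture: the `TinyCDM` class, the concept count, and the examinee/question sizes are assumptions introduced only to show the reported setup in code.

```python
import torch
import torch.nn as nn

NUM_CONCEPTS = 32   # assumed value; the paper only says dims equal the concept count
BATCH_SIZE = 256    # fixed batch size reported in the experiment setup


class TinyCDM(nn.Module):
    """Illustrative placeholder CDM; not the paper's actual model."""

    def __init__(self, num_examinees: int, num_questions: int, dim: int):
        super().__init__()
        # Latent features for examinees and questions share one dimension,
        # set equal to the number of knowledge concepts (as in the paper).
        self.examinee = nn.Embedding(num_examinees, dim)
        self.question = nn.Embedding(num_questions, dim)
        self.out = nn.Linear(dim, 1)

    def forward(self, e_idx: torch.Tensor, q_idx: torch.Tensor) -> torch.Tensor:
        h = self.examinee(e_idx) * self.question(q_idx)
        return torch.sigmoid(self.out(h)).squeeze(-1)


def xavier_init(module: nn.Module) -> None:
    # Xavier (Glorot & Bengio 2010) initialization for every weight matrix;
    # 1-D parameters (biases) are left at their defaults.
    for p in module.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)


model = TinyCDM(num_examinees=100, num_questions=200, dim=NUM_CONCEPTS)
xavier_init(model)
optimizer = torch.optim.Adam(model.parameters())  # Adam, as reported

# One forward pass over a batch of 256 (examinee, question) index pairs.
e_idx = torch.randint(0, 100, (BATCH_SIZE,))
q_idx = torch.randint(0, 200, (BATCH_SIZE,))
preds = model(e_idx, q_idx)  # correctness probabilities in (0, 1)
```

A real reproduction would additionally need the per-dataset sizes, the loss (typically binary cross-entropy for response prediction), and the split protocol, none of which are fully specified in the paper per the checklist above.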