AD4CD: Causal-Guided Anomaly Detection for Enhancing Cognitive Diagnosis
Authors: Haiping Ma, Yue Yao, Changqian Wang, Siyu Song, Yong Yang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate that AD4CD effectively captures anomalous data in the diagnostic process across three real-world datasets, enhancing the accuracy of the diagnostic results. |
| Researcher Affiliation | Academia | Haiping Ma1,2*, Yue Yao1, Changqian Wang1, Siyu Song1, Yong Yang1 1Institutes of Physical Science and Information Technology, Anhui University, China 2Department of Information Materials and Intelligent Sensing Laboratory of Anhui Province, China EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods using mathematical equations and prose but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/BIMK/Intelligent-Education/ |
| Open Datasets | Yes | We conducted experiments on three real-world datasets: ASSISTments09, ASSISTments17, and Junyi. These datasets all include a field for students response times. ASSISTments (Feng, Heffernan, and Koedinger 2009) is a publicly available dataset collected from the online tutoring system ASSISTments, while Junyi (Chang et al. 2015) is from the online learning platform Junyi Academy, established in 2012. |
| Dataset Splits | No | The paper mentions filtering criteria for students (fewer than 10 response logs) but does not specify how the datasets were split into training, validation, and test sets. For these three datasets, we filtered out students with fewer than 10 response logs to ensure sufficient data for model training. |
| Hardware Specification | Yes | All models were implemented in Pytorch, and all experiments were conducted on a Linux server with an RTX4090. |
| Software Dependencies | No | The paper mentions 'All models were implemented in Pytorch' but does not provide a specific version number for Pytorch or any other software dependencies. |
| Experiment Setup | Yes | The loss function ratios were set to 1, 0.01, and 1. The hyperparameters of the comparison methods were tuned on the validation set according to the original papers. |
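The filtering criterion quoted under Dataset Splits (students with fewer than 10 response logs removed) can be sketched with pandas. The column names (`student_id`, `exercise_id`, `correct`) are hypothetical; the actual ASSISTments and Junyi schemas may differ.

```python
import pandas as pd

# Toy response-log table; real datasets have one row per student answer.
logs = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 2, 3],
    "exercise_id": [10, 11, 10, 12, 13, 10],
    "correct": [1, 0, 1, 1, 0, 1],
})

MIN_LOGS = 10  # threshold stated in the paper


def filter_students(df: pd.DataFrame, min_logs: int = MIN_LOGS) -> pd.DataFrame:
    """Keep only students with at least `min_logs` response logs."""
    counts = df.groupby("student_id")["exercise_id"].transform("size")
    return df[counts >= min_logs].reset_index(drop=True)


# With the toy data, use a smaller threshold to see the effect:
filtered = filter_students(logs, min_logs=3)  # only student 2 has >= 3 logs
```

The `transform("size")` call broadcasts each student's log count back to every row, which avoids a separate merge step.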
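The quoted setup states three loss weights (1, 0.01, 1) but not which loss terms they scale; a weighted combination in PyTorch might look like the sketch below, where the individual loss components are placeholders rather than AD4CD's actual objectives.

```python
import torch

# Weights as reported in the paper; the loss terms themselves are assumed.
lambda1, lambda2, lambda3 = 1.0, 0.01, 1.0

pred_loss = torch.tensor(0.52)     # e.g. a prediction (BCE) loss
aux_loss = torch.tensor(3.0)       # e.g. an auxiliary / regularization loss
anomaly_loss = torch.tensor(0.25)  # e.g. an anomaly-detection loss

# total = 1.0 * 0.52 + 0.01 * 3.0 + 1.0 * 0.25 = 0.80
total = lambda1 * pred_loss + lambda2 * aux_loss + lambda3 * anomaly_loss
```

In training code, `total.backward()` would then propagate gradients through all three weighted terms at once.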