GenAL: Generative Agent for Adaptive Learning

Authors: Rui Lv, Qi Liu, Weibo Gao, Haotian Zhang, Junyu Lu, Linbo Zhu

AAAI 2025

Reproducibility assessment (each row gives the variable, the result, and the supporting LLM response):
Research Type: Experimental — "We evaluated our approach on three real-world datasets, and the experimental results demonstrate that our GenAL not only consistently outperforms all baselines but also exhibits strong generalization ability."
Researcher Affiliation: Academia — ¹State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China; ²Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
Pseudocode: No — The paper describes the framework components (Global Thinking Agent, Local Teaching Agent) and their sub-modules, along with mathematical formulations, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code: Yes — "Our code is available at https://github.com/karin0018/GenAL."
Open Datasets: Yes — "Our experiments are performed on three real-world datasets: Junyi¹ and ASSIST09², and we collect a dataset with question text content details from real-world scenarios, noted as TextLog." ¹https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=1198 ²https://sites.google.com/site/assistmentsdata/home/
Dataset Splits: Yes — "The dataset split follows (Liu et al. 2019). In particular, our GenAL uses the training set to train the simulator and initialize the learner's profile. Then we use the test set for inference."
Hardware Specification: No — The paper mentions using LLM-based models (Llama2-7B, Llama3-8B, GPT-3.5-turbo) but does not specify the hardware (e.g., GPU or CPU models, memory) on which these models were run or experiments were conducted.
Software Dependencies: No — The paper names the LLM backends it employs (Llama2-7B, Llama3-8B, and GPT-3.5-turbo) and a temperature setting of 0.9, but it provides no version numbers for the software libraries, programming languages, or frameworks used in the implementation.
Experiment Setup: Yes — "In our framework, we employ three LLM-based models for testing: Llama2-7B, Llama3-8B, and the GPT-3.5-turbo provided by OpenAI. The temperature parameter is set to 0.9. ... The dataset split follows (Liu et al. 2019). ... The learning step is 20. ... when the learning steps are set to 5"
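For a reproduction attempt, the reported generation settings can be captured as request parameters. This is a minimal sketch, assuming an OpenAI-style chat-completion interface; `build_request` is a hypothetical helper, not the authors' code — only the model names and the temperature value of 0.9 come from the paper.

```python
def build_request(model: str, prompt: str) -> dict:
    """Assemble chat-completion parameters matching the reported setup.

    The dict shape follows the common OpenAI-style API (an assumption);
    temperature=0.9 is the value stated in the paper.
    """
    return {
        "model": model,
        "temperature": 0.9,
        "messages": [{"role": "user", "content": prompt}],
    }

# The same parameters would be sent to any of the three tested backends
# (Llama2-7B, Llama3-8B, GPT-3.5-turbo), e.g. via an OpenAI-compatible server.
params = build_request("gpt-3.5-turbo", "Recommend the next question for this learner.")
print(params["temperature"])  # 0.9
```

Pinning these parameters in one place makes it easy to rerun the evaluation against each backend without touching the rest of the pipeline.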