CoDeR: Counterfactual Demand Reasoning for Sequential Recommendation

Authors: Shuai Tang, Sitao Lin, Jianghong Ma, Xiaofeng Zhang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on three real-world datasets demonstrate that CoDeR significantly outperforms existing baselines.
Researcher Affiliation | Academia | Shuai Tang*, Sitao Lin*, Jianghong Ma, Xiaofeng Zhang. Harbin Institute of Technology (Shenzhen), Shenzhen, China. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology using prose and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing its own source code or a link to a code repository for the methodology described. While it references a GitHub link for datasets, this is not for the authors' implementation.
Open Datasets | Yes | In our experiments, we adopt three widely recognized real-world datasets, Diginetica, Tmall, and TaFeng¹, to assess the performance of the proposed model. The detailed statistics of datasets are provided in Table 1. ¹https://github.com/RUCAIBox/RecSysDatasets
Dataset Splits | Yes |
Dataset | #Item | #Category | #Training | #Testing
Diginetica | 43,137 | 996 | 103,906 | 25,976
Tmall | 17,095 | 698 | 28,004 | 7,001
TaFeng | 14,637 | 1,638 | 80,552 | 20,138
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software names with version numbers (e.g., programming languages, libraries, frameworks) used for replication.
Experiment Setup | No | Model Loss: To train the model, we utilize the cross-entropy loss function, which is formulated as follows: $\mathcal{L} = -\sum_{s \in \mathcal{S}} \sum_{t=1}^{|s|} y_t \log p(v_t \mid s) + \lambda_1 \lVert \Theta \rVert^2$ (25), where $\mathcal{S}$ denotes the set of training sequences and $y_t$ is the ground truth for the target item. L2 regularization is applied to mitigate overfitting, where $\lambda_1$ is a hyper-parameter that controls the strength of the L2 regularization, and $\Theta$ is the set of model parameters. ... If $\eta$ is below a threshold $\mu$, set to the mean KL divergence across users, the user is considered stable. Next, if the modularity gain calculated from Eq. 20 falls below the threshold $\gamma$ after adding candidate items, this indicates a demand shift.
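A minimal NumPy sketch of the loss in Eq. 25, assuming a single training sequence and a one-hot target; the function name, the `params` list, and the default `lam1` value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ce_loss_with_l2(logits, target_idx, params, lam1=1e-4):
    """Cross-entropy over next-item logits plus an L2 penalty on model parameters.

    logits: unnormalized scores over the item vocabulary for one step.
    target_idx: index of the ground-truth next item (y_t as a one-hot target).
    params: list of parameter arrays (Theta); lam1 is the L2 strength (lambda_1).
    """
    # Numerically stable softmax over the item vocabulary.
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    # -y_t * log p(v_t | s) with a one-hot y_t reduces to -log of the target prob.
    ce = -np.log(probs[target_idx])
    # lambda_1 * ||Theta||^2, summed over all parameter arrays.
    l2 = lam1 * sum(np.sum(p ** 2) for p in params)
    return float(ce + l2)
```

With uniform logits over V items and no parameters, the loss reduces to log(V), which is a quick sanity check for an implementation.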
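The stability/demand-shift test described above can be sketched as follows, assuming $\eta$ is a per-user KL divergence and the modularity gain from Eq. 20 is available as a scalar; all function and variable names here are hypothetical, since the paper gives no code.

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def detect_demand_shift(eta, all_user_etas, modularity_gain, gamma):
    """Return (stable, shift) flags per the thresholding rule.

    eta: this user's KL divergence.
    all_user_etas: KL divergences across all users; mu is their mean.
    modularity_gain: gain from Eq. 20 after adding candidate items.
    gamma: modularity-gain threshold.
    """
    mu = float(np.mean(all_user_etas))   # threshold mu = mean KL across users
    stable = eta < mu                    # below-mean divergence -> stable user
    shift = stable and (modularity_gain < gamma)  # low gain -> demand shift
    return stable, shift
```

The two thresholds act in sequence: only users judged stable by the KL test are then checked for a demand shift via the modularity gain.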