Counterfactual Task-augmented Meta-learning for Cold-start Sequential Recommendation

Authors: Zhiqiang Wang, Jiayi Pan, Xingwang Zhao, Jianqing Liang, Chenjiao Feng, Kaixuan Yao

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our method significantly outperforms existing state-of-the-art techniques, achieving superior results in cold-start sequential recommendation tasks." ... Section headings: Experiments; Datasets and Experimental Setup; Evaluation Metrics; Comparison Methods; Experimental Results; Ablation Study.
Researcher Affiliation | Academia | (1) Shanxi Taihang Laboratory, School of Computer and Information Technology, Shanxi University, Taiyuan, China; (2) College of Applied Mathematics, Shanxi University of Finance and Economics, Taiyuan, China.
Pseudocode | Yes | Algorithm 1: Meta-Training of CTM
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide any links to a code repository.
Open Datasets | Yes | "Datasets. To evaluate the effectiveness of the model, experiments were run on three benchmark datasets, chosen for their diversity and relevance. Dataset statistics are in Table 1." Electronics: a subset of the Amazon Review Data (He and McAuley 2016)... MovieLens 100K: a well-known dataset in the recommendation systems domain (Harper and Konstan 2015)... KuaiRec: a publicly available dataset for recommendation research (Gao et al. 2022)...
Dataset Splits | No | The paper states that "Each task Ti involves N users, where each user selects s1 behavior sequences for the support set D_S^actual and s2 sequences for the query set D_Q^actual" and analyzes the "Initial Number of Interactions (k)". However, it does not provide specific train/validation/test percentages or absolute counts for the overall datasets, nor does it refer to standard predefined splits for the benchmark datasets used in the main experimental evaluation.
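The per-task split quoted above (N users; per user, s1 behavior sequences for the support set and s2 for the query set) can be sketched as follows. The user IDs, sequence contents, and sampling scheme here are illustrative assumptions, not the paper's code:

```python
import random

random.seed(0)

# Assumed toy values for the paper's N, s1, s2; each "sequence" is a
# synthetic list of item IDs standing in for a user's behavior sequence.
N, s1, s2 = 3, 2, 1
user_sequences = {u: [[u * 10 + i] for i in range(s1 + s2 + 2)]
                  for u in range(N)}

support, query = [], []
for u, seqs in user_sequences.items():
    picked = random.sample(seqs, s1 + s2)   # sample s1+s2 sequences per user
    support.extend(picked[:s1])             # s1 sequences -> D_S^actual
    query.extend(picked[s1:s1 + s2])        # s2 sequences -> D_Q^actual

print(len(support), len(query))  # N*s1 support sequences, N*s2 query sequences
```

With N=3, s1=2, s2=1 this yields 6 support and 3 query sequences per task; the paper does not specify how s1 and s2 relate to an overall train/test split, which is why the "Dataset Splits" answer above is No.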
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries).
Experiment Setup | Yes | "During the inner-loop optimization phase, the support set D_S is used to update Θ via stochastic gradient descent: Θ′ = Θ − α∇_Θ L(f(ϕ, Θ)), where α is the learning rate." ... L_total = λ·L(D_Q^actual) + (1 − λ)·L(D_Q^cf) ... Algorithm 1: Meta-Training of CTM. Input: distribution over tasks p(T), step-size hyperparameters α and β, parameters ϕ, Θ. ... Parameter Analysis. Counterfactual Training Intervention Degree: this analysis examines the impact of the counterfactual training intervention degree, denoted by σ... Initial Number of Interactions: we investigate the influence of the initial number of interactions, denoted by k...
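The inner/outer loop quoted above can be sketched in plain NumPy. The linear model, synthetic data, hyperparameter values, and the first-order (FOMAML-style) meta-update are assumptions for illustration only, not the paper's CTM implementation, which adapts full sequential-recommendation parameters Θ with a second-order meta-gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_grad(theta, x, y):
    """Loss value and gradient of MSE for a toy linear model y_hat = x @ theta."""
    err = x @ theta - y
    return (err ** 2).mean(), 2.0 * x.T @ err / len(y)

alpha, beta, lam = 0.1, 0.01, 0.7   # inner/outer step sizes and loss weight (assumed values)
theta = rng.normal(size=(4,))       # stand-in for the task parameters Theta

# One meta-training task: support set, actual query set, counterfactual query set.
x_s, y_s = rng.normal(size=(8, 4)), rng.normal(size=(8,))     # D_S
x_qa, y_qa = rng.normal(size=(8, 4)), rng.normal(size=(8,))   # D_Q^actual
x_qc, y_qc = rng.normal(size=(8, 4)), rng.normal(size=(8,))   # D_Q^cf

# Inner loop: Theta' = Theta - alpha * grad_Theta L(support)
_, g_s = mse_grad(theta, x_s, y_s)
theta_prime = theta - alpha * g_s

# Outer loss: L_total = lam * L(D_Q^actual) + (1 - lam) * L(D_Q^cf)
l_qa, g_qa = mse_grad(theta_prime, x_qa, y_qa)
l_qc, g_qc = mse_grad(theta_prime, x_qc, y_qc)
l_total = lam * l_qa + (1 - lam) * l_qc

# First-order meta-update of Theta with step size beta (FOMAML approximation:
# the query-set gradients at Theta' are applied directly to Theta).
theta = theta - beta * (lam * g_qa + (1 - lam) * g_qc)
```

The λ-weighted sum mirrors the paper's L_total, which balances the actual query loss against the counterfactual one during meta-training.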