CLLMRec: Contrastive Learning with LLMs-based View Augmentation for Sequential Recommendation

Authors: Fan Lu, Xiaolong Xu, Haolong Xiang, Lianyong Qi, Xiaokang Zhou, Fei Dai, Wanchun Dou

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on three public datasets demonstrate that the proposed method outperforms state-of-the-art baseline models and significantly enhances recommendation performance. The framework was evaluated on three publicly available datasets, and the results demonstrate that it outperforms state-of-the-art models in all scenarios. Additionally, further ablation experiments validate the effectiveness of the LLMs-based view augmentation method and the contrastive learning module.
Researcher Affiliation | Academia | 1 Nanjing University of Information Science and Technology; 2 China University of Petroleum (East China); 3 Kansai University; 4 Southwest Forestry University; 5 Nanjing University. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | Yes | The pseudocode for the overall algorithm is presented in Algorithm 1.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository.
Open Datasets | Yes | Datasets. The experiments were conducted using three publicly available recommendation system datasets: ML-1M, Beauty, and Steam, which cover multiple domains including movies, cosmetics, and games, making them highly representative.
Dataset Splits | No | The paper mentions evaluating NDCG@K (full corpus) and discusses a view augmentation rate of 0.75, but it does not specify explicit training/test/validation dataset splits (e.g., percentages or counts) for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper mentions using the AdamW optimizer and the cross-entropy loss function, but it does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Models are trained using the cross-entropy loss function and the AdamW optimizer, with a batch size of 128, a learning rate of 1e-3, and a maximum of 2000 training epochs. Validation occurs every 10 epochs during training. The view augmentation rate α is set to 0.75, and the contrastive learning weight wi is set to 0.5. Early stopping is employed when Recall@10 shows no improvement for 20 consecutive validation rounds.
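The reported setup can be sketched as a small configuration plus the early-stopping rule it describes. The snippet below is a hypothetical illustration, not the authors' code: the `TRAIN_CONFIG` dict and `should_stop` helper are assumed names, and only the values stated in the setup row are encoded (validate every 10 epochs; halt once Recall@10 fails to improve for 20 consecutive validation rounds).

```python
# Hypothetical sketch of the training configuration reported in the paper
# (names are illustrative; only the numeric values come from the setup row).
TRAIN_CONFIG = {
    "optimizer": "AdamW",
    "loss": "cross-entropy",
    "batch_size": 128,
    "learning_rate": 1e-3,
    "max_epochs": 2000,
    "validate_every": 10,        # epochs between validation rounds
    "early_stop_patience": 20,   # validation rounds without Recall@10 improvement
    "view_augmentation_rate": 0.75,
    "contrastive_weight": 0.5,
}

def should_stop(recall_at_10_history, patience=20):
    """Return True once `patience` consecutive validation rounds have
    all failed to improve on the best Recall@10 seen so far."""
    best = float("-inf")
    stale_rounds = 0
    for recall in recall_at_10_history:
        if recall > best:
            best = recall      # new best: reset the stale counter
            stale_rounds = 0
        else:
            stale_rounds += 1  # no improvement this validation round
        if stale_rounds >= patience:
            return True
    return False
```

With validation every 10 epochs, the 20-round patience corresponds to up to 200 training epochs without improvement before training halts.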