Intent-Oriented Contrastive Learning for Sequential Recommendation
Authors: Wuhong Wang, Jianhui Ma, Yuren Zhang, Kai Zhang, Junzhe Jiang, Yihui Yang, Yacong Zhou, Zheng Zhang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four public datasets demonstrate that our approach effectively models user intent and improves recommendation performance. We evaluate our method against all baseline models across various datasets, with the results presented in Table 2. We conduct ablation experiments. We conduct a case study to evaluate the effectiveness of our method in modeling user intents. |
| Researcher Affiliation | Academia | Wuhong Wang, Jianhui Ma*, Yuren Zhang, Kai Zhang, Junzhe Jiang, Yihui Yang, Yacong Zhou, Zheng Zhang State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, China EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods using mathematical equations and structured steps within paragraphs, but it does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper states: "All baseline models are implemented based on public resources or codes provided by the respective authors. Our method is implemented in PyTorch." However, it does not explicitly state that the authors' own implementation code for IOCLRec is publicly available or provide a link to a repository. |
| Open Datasets | Yes | We conduct experiments on four public datasets. Sports, Beauty and Toys are three subcategories of Amazon review data introduced in (McAuley et al. 2015). MovieLens-1M (Harper and Konstan 2015) is a dataset containing users' behavior logs on movies, denoted as ML-1M. |
| Dataset Splits | No | The paper describes preprocessing steps: "Following (Chen et al. 2022; Xie et al. 2022), we only retain the 5-core datasets, where each user and item has at least 5 interactions." It also mentions the goal of predicting the next item, implying a common sequential recommendation split, but it does not explicitly state the train/validation/test split percentages, sample counts, or the detailed methodology used for partitioning the data. |
| Hardware Specification | Yes | All experiments are conducted on a single Tesla V100 GPU. |
| Software Dependencies | No | Our method is implemented in PyTorch. The paper mentions a software framework (PyTorch) but does not provide any specific version numbers for it or any other key software dependencies. |
| Experiment Setup | Yes | We set the number of self-attention blocks and attention heads to 2, and the embedding dimension to 64. The batch size is set to 256. We use the Adam optimizer (Kingma and Ba 2014) with a learning rate of 0.001. Each data operator's sampling ratio (i.e., η, γ, and δ) is varied within the range [0.1, 0.9] (stepping by 0.1). Parameters ϵ and ω range from 0.5 to 1 (stepping by 0.1). Parameters λ, α, β and the dropout rate are set within the range {0.1, 0.2, 0.3, 0.4, 0.5}. The number of clusters is chosen from {64, 128, 256, 512, 1024}. The values for k1, k2, g, l, r and m are set to 2, 5, 10, 3, 20 and 50, respectively. |
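The Experiment Setup row above can be collected into a single configuration sketch, which is useful when attempting a reproduction without the authors' (unreleased) code. The following Python snippet is a hypothetical transcription of the reported values; all key names are illustrative assumptions, not identifiers from any released IOCLRec codebase, and only the numeric values come from the paper.

```python
# Hypothetical reproduction config for IOCLRec, transcribed from the paper's
# "Experiment Setup" description. Key names are illustrative assumptions.

# Values the paper fixes outright.
FIXED_HPARAMS = {
    "num_self_attention_blocks": 2,
    "num_attention_heads": 2,
    "embedding_dim": 64,
    "batch_size": 256,
    "optimizer": "Adam",       # Kingma and Ba 2014
    "learning_rate": 1e-3,
    "k1": 2, "k2": 5, "g": 10, "l": 3, "r": 20, "m": 50,
}

# Ranges the paper reports searching over (values tuned per dataset).
SEARCH_SPACE = {
    # Data-operator sampling ratios, [0.1, 0.9] stepping by 0.1.
    "eta":   [round(0.1 * i, 1) for i in range(1, 10)],
    "gamma": [round(0.1 * i, 1) for i in range(1, 10)],
    "delta": [round(0.1 * i, 1) for i in range(1, 10)],
    # Epsilon and omega, 0.5 to 1 stepping by 0.1.
    "epsilon": [round(0.5 + 0.1 * i, 1) for i in range(6)],
    "omega":   [round(0.5 + 0.1 * i, 1) for i in range(6)],
    # Lambda, alpha, beta, and dropout share the same candidate set.
    "lambda":  [0.1, 0.2, 0.3, 0.4, 0.5],
    "alpha":   [0.1, 0.2, 0.3, 0.4, 0.5],
    "beta":    [0.1, 0.2, 0.3, 0.4, 0.5],
    "dropout": [0.1, 0.2, 0.3, 0.4, 0.5],
    # Number of clusters for intent modeling.
    "num_clusters": [64, 128, 256, 512, 1024],
}
```

Keeping the fixed values and the search spaces in separate dictionaries mirrors how the paper distinguishes constants from tuned hyperparameters, and makes a grid search over `SEARCH_SPACE` straightforward to script.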