Attribute Association Driven Multi-Task Learning for Session-based Recommendation
Authors: Xinyao Wang, Zhizhi Yu, Dongxiao He, Liang Yang, Jianguo Wei, Di Jin
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three public datasets demonstrate the superiority of our method in recommendation accuracy (P@20) and ranking quality (MRR@20), validating the model's effectiveness. |
| Researcher Affiliation | Academia | 1College of Intelligence and Computing, Tianjin University, Tianjin, China 2School of Artificial Intelligence, Hebei University of Technology, Tianjin, China 3Key Laboratory of Artificial Intelligence Application Technology, Qinghai Minzu University, Xining, 810007, China EMAIL, EMAIL |
| Pseudocode | No | The paper describes the proposed A2D-MTL method, its components, and mathematical formulations, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Following recent studies on session-based recommendation systems [Hou et al., 2022; Zhang et al., 2023b; Wang et al., 2024b], three widely-used public benchmark datasets are adopted in our work: Diginetica, Tmall, and Yoochoose1/64. Diginetica [1]: A dataset of anonymous user transaction records from an e-commerce search engine's logs over five months, provided by the CIKM Cup 2016. Tmall [2]: A dataset of anonymized shopping logs from the Tmall platform, released for the IJCAI15 competition. Yoochoose1/64 [3]: A dataset of user click events on an e-commerce platform, created for the RecSys Challenge 2015, using the latest 1/64 portion of training sessions. [1] http://cikm2016.cs.iupui.edu/cikm-cup [2] https://tianchi.aliyun.com/dataset/42 [3] http://2015.recsyschallenge.com/challege |
| Dataset Splits | Yes | Following the approach in [Wu et al., 2019; Wang et al., 2020], we preprocess the three datasets. Specifically, the most recent week's historical sessions are used as the test set, while the remaining sessions are used as the training set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Following [Wu et al., 2019; Wang et al., 2020], the dimension of the latent vectors is fixed to 256, and the batch size is set to 100. We use the Adam optimizer with an initial learning rate of 0.001, which decays by 0.8 after every 3 epochs. The L2 penalty is set to 10^-5. The regularization parameter λ in Eq.(25) balances the item prediction loss and category prediction loss in A2D-MTL. We evaluate the performance of A2D-MTL under different λ values {0.01, 0.1, 0.2, 0.23, 0.25, 0.27, 0.29, 0.3, 0.4, 0.5}. |
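The week-based split reported in the Dataset Splits row can be sketched as follows. Since the paper releases no code, the function name `split_sessions_by_week` and the `(session_id, end_time)` representation are illustrative assumptions, not the authors' actual preprocessing.

```python
from datetime import timedelta

def split_sessions_by_week(sessions):
    """Hold out the most recent week of sessions as the test set.

    `sessions`: list of (session_id, end_time) pairs, where end_time
    is a datetime. Sessions ending within 7 days of the latest
    session form the test set; all earlier sessions form the
    training set. (Assumed representation; not the authors' code.)
    """
    latest = max(end for _, end in sessions)
    cutoff = latest - timedelta(days=7)
    train = [(sid, end) for sid, end in sessions if end <= cutoff]
    test = [(sid, end) for sid, end in sessions if end > cutoff]
    return train, test
```

The cutoff is anchored to the latest session timestamp rather than wall-clock time, so the split is deterministic given the raw logs.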
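The training schedule and loss weighting in the Experiment Setup row can be sketched numerically. A minimal sketch, assuming a stepped decay schedule and the usual additive multi-task form L = L_item + λ·L_cat; the function names and the additive form are assumptions based on the paper's description, since no source code is available.

```python
def stepped_learning_rate(epoch, base_lr=0.001, decay=0.8, step=3):
    """Learning rate at `epoch` (0-indexed): starts at 0.001 and
    decays by a factor of 0.8 after every 3 epochs."""
    return base_lr * decay ** (epoch // step)

def multi_task_loss(item_loss, category_loss, lam):
    """Combined objective, assuming the common additive form in
    which λ weights the category prediction loss (cf. Eq. (25))."""
    return item_loss + lam * category_loss
```

Under this reading, λ = 0 reduces the objective to plain next-item prediction, which is why the reported λ sweep concentrates values around 0.2–0.3.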