Preference Identification by Interaction Overlap for Bundle Recommendation
Authors: Fei-Yao Liang, Wu-Dong Xi, Xing-Xing Xing, Wei Wan, Chang-Dong Wang, Hui-Yu Zhou
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed PIIO model is applied to two public bundle recommendation datasets, and extensive experiments demonstrate that it outperforms state-of-the-art methods across multiple evaluation metrics. |
| Researcher Affiliation | Collaboration | 1. School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; 2. UX Center, NetEase Games, Guangzhou, China; 3. Guangxi Key Laboratory of Digital Infrastructure, Guangxi Zhuang Autonomous Region Information Center, Nanning, China; 4. Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, China; 5. Guangdong Hengdian Information Technology Co., Ltd., Guangzhou, China |
| Pseudocode | No | The paper describes the methodology using text and mathematical equations, and presents an overall framework diagram in Figure 1, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about making the source code available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Building upon previous work [Chang et al., 2020; Ma et al., 2022; Zhao et al., 2022], we conduct extensive experiments on two datasets, Youshu and NetEase. These datasets include user-bundle historical interactions, user-item historical interactions and bundle-item inclusions. The two datasets exhibit distinct statistical characteristics due to the different application domains. In addition, they vary in size and sparsity, particularly in the average number of items per bundle within each dataset. |
| Dataset Splits | No | The paper mentions using a 'test set' for evaluation and discusses data augmentation for users with fewer interactions, but it does not explicitly specify the training/validation/test splits (e.g., percentages or sample counts). |
| Hardware Specification | Yes | Our model is trained end-to-end on an NVIDIA GeForce RTX 3090 GPU without any pre-training. |
| Software Dependencies | No | The paper mentions using Xavier initialization and the Adam optimizer, but it does not specify any software dependencies with their version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | For the general parameters across both datasets, we set the embedding size to 64 and the learning rate to 0.005. For the Youshu dataset, we set the batch size to 1024 and the autoencoder's hidden size to 256. For the NetEase dataset, we set the batch size to 2048 and the autoencoder's hidden size to 128. In the Data Augmentation module... tune t in {0.1, 0.2, 0.3, 0.4, 0.5}. In the preference aggregation module... tune k in {20, 30, 40, 50, 60}. |
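The reported setup can be summarized as a small configuration sketch. This is a reconstruction for reference only: the key names and the `config_for` helper are assumptions, not identifiers from the authors' (unreleased) code.

```python
# Hypothetical reconstruction of the reported PIIO hyperparameters.
# Key names are illustrative; only the values come from the paper.

COMMON = {
    "embedding_size": 64,   # shared across both datasets
    "learning_rate": 0.005,
}

PER_DATASET = {
    "Youshu":  {"batch_size": 1024, "ae_hidden_size": 256},
    "NetEase": {"batch_size": 2048, "ae_hidden_size": 128},
}

# Reported search grids for the two tuned hyperparameters.
T_GRID = [0.1, 0.2, 0.3, 0.4, 0.5]  # data-augmentation threshold t
K_GRID = [20, 30, 40, 50, 60]       # preference-aggregation k

def config_for(dataset: str) -> dict:
    """Merge the shared settings with the dataset-specific ones."""
    return {**COMMON, **PER_DATASET[dataset]}
```

For example, `config_for("Youshu")` yields the embedding size, learning rate, batch size, and autoencoder hidden size reported for that dataset in one dictionary.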