Decision-Aware Preference Modeling for Multi-Behavior Recommendation

Authors: Qingfeng Li, Wei Liu, Zaiqiao Meng, Jian Yin

IJCAI 2025

Reproducibility Assessment (variable, assessed result, and supporting quote from the paper)
Research Type: Experimental. Evidence: "Extensive experiments on three real-world datasets demonstrate the consistent improvements achieved by DAPM over thirteen state-of-the-art baselines. We release our code at https://github.com/Breeze-del/DAPM." "Comprehensive experiments on three real-world datasets demonstrate that our DAPM outperforms the state-of-the-art approaches in multi-behavior scenarios. Further experimental results verify the rationality and effectiveness of the designed sub-modules."
Researcher Affiliation: Academia. Evidence: "1School of Artificial Intelligence, Sun Yat-sen University, Zhuhai, China; 2The Technology Innovation Center for Collaborative Applications of Natural Resources Data in GBA, MNR" (author email addresses redacted in the extracted text).
Pseudocode: No. The paper describes its methodology using mathematical equations and component descriptions, but it does not include any explicitly labeled pseudocode or algorithm block.
Open Source Code: Yes. Evidence: "Extensive experiments on three real-world datasets demonstrate the consistent improvements achieved by DAPM over thirteen state-of-the-art baselines. We release our code at https://github.com/Breeze-del/DAPM."
Open Datasets: Yes. Evidence: "To evaluate the effectiveness of the proposed DAPM, we conduct extensive experiments on three public multi-behavior datasets, including Beibei, Taobao and Tmall."
Dataset Splits: No. The paper states: "For the three datasets, to eliminate duplicate data, we follow the previous works [Cheng et al., 2023; Yan et al., 2023; Meng et al., 2023a] to retain only the earliest occurrence of each interaction," and provides a table of dataset statistics, but it does not explicitly specify the training, validation, or test splits used in the experiments.
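The quoted preprocessing step (keeping only the earliest occurrence of each interaction) can be sketched as follows. This is a minimal illustration, not the paper's released code; the tuple layout `(user, item, behavior, timestamp)` is an assumption about how the interaction logs are represented.

```python
def keep_earliest(interactions):
    """Retain only the earliest occurrence of each (user, item, behavior)
    interaction, mirroring the deduplication step quoted from the paper.

    `interactions` is an iterable of (user, item, behavior, timestamp)
    tuples; the layout is assumed for illustration.
    """
    earliest = {}
    for user, item, behavior, ts in interactions:
        key = (user, item, behavior)
        # Keep the smallest timestamp seen for this interaction triple.
        if key not in earliest or ts < earliest[key]:
            earliest[key] = ts
    return [(u, i, b, t) for (u, i, b), t in sorted(earliest.items())]
```

A later buy of the same item by the same user is dropped in favor of the first one, so each triple appears exactly once in the cleaned log.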
Hardware Specification: Yes. Evidence: "All methods are trained on a single NVIDIA GeForce GTX 3090 GPU with the same hidden state dimensionality setting to ensure a fair comparison of efficiency."
Software Dependencies: No. The paper mentions that parameters are "optimized by Adam" and refers to various baseline models, but it does not specify version numbers for any software dependencies (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup: Yes. Evidence: "For all methods, we uniformly set the batch size to 1024 and the embedding size to 64 during the training phase. The parameters are optimized by Adam, while the learning rate is set to 1e-3. We adjust the behavior coefficients for each behavior in [0, 1/6, 2/6, 3/6, 4/6, 5/6, 1]. To determine the optimal values for the hyperparameters, including c and beta, we perform a grid search on the set [1e-2, 1e-1, 3e-1, 5e-1, 7e-1, 1, 10, 100]."
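The grid search described in the quote above can be sketched as below. This is a hedged illustration of the stated protocol only: the helper `train_and_evaluate` and its keyword arguments are hypothetical stand-ins, not functions from the released DAPM repository.

```python
import itertools

# Search set quoted from the paper for the hyperparameters c and beta.
GRID = [1e-2, 1e-1, 3e-1, 5e-1, 7e-1, 1, 10, 100]

def grid_search(train_and_evaluate):
    """Return the (c, beta) pair with the highest validation score.

    `train_and_evaluate` is a hypothetical callable that trains one model
    configuration and returns a scalar score; the fixed settings below
    (batch size 1024, embedding size 64, learning rate 1e-3) are taken
    from the paper's experiment setup.
    """
    best_score, best_params = float("-inf"), None
    for c, beta in itertools.product(GRID, GRID):
        score = train_and_evaluate(c=c, beta=beta, batch_size=1024,
                                   embedding_size=64, lr=1e-3)
        if score > best_score:
            best_score, best_params = score, (c, beta)
    return best_params, best_score
```

With 8 candidate values per hyperparameter, the search trains 64 configurations; each behavior coefficient would be tuned analogously over [0, 1/6, ..., 1].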