Disentangled Modeling of Preferences and Social Influence for Group Recommendation

Authors: Guangze Ye, Wen Wu, Guoqing Wang, Xi Chen, Hong Zheng, Liang He

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental The experimental results demonstrate that our model significantly outperforms state-of-the-art methods on two real-world datasets.
Researcher Affiliation Academia (1) Lab of Artificial Intelligence for Education, East China Normal University, Shanghai, China; (2) Shanghai Institute of Artificial Intelligence for Education, East China Normal University, Shanghai, China; (3) School of Computer Science and Technology, East China Normal University, Shanghai, China; (4) Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; (5) Shanghai Changning Mental Health Center, Shanghai, China
Pseudocode No The paper describes the model architecture and propagation mechanisms using equations and descriptive text, but it does not contain a clearly labeled pseudocode block or algorithm.
Open Source Code Yes The code for DisRec is available at https://github.com/DisRec/DisRec.
Open Datasets Yes We conduct experiments on two real-world public datasets: Mafengwo, published by (Cao et al. 2018), and Yelp, published by (Yin et al. 2019).
Dataset Splits No The paper uses real-world public datasets and refers to a training set implicitly through the loss function definition (O denotes the training set), but it does not explicitly provide split percentages, sample counts, or a methodology for partitioning the data into training, validation, and test sets.
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. It only mentions general settings like embedding size, batch size, and learning rate.
Software Dependencies No The paper mentions general settings and hyperparameters but does not provide specific version numbers for any ancillary software dependencies (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup Yes For the general settings, the embedding size is set to 64, the batch size is 512, and the number of negative samples is 10. For the baseline models, we refer to the best parameter setups reported in their original papers. For our model, we set the number of convolutional layers L to 3 and the SSL weight δ to 0.5, and tune the learning rate in [1e-4, 1e-3] and the dropout rate in [0.1, 0.5].
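The reported settings can be captured as a small configuration grid. The sketch below is illustrative only: the names (BASE_CONFIG, config_grid) and the specific grid points sampled from the stated tuning intervals are assumptions, not taken from the authors' code, which reports only the intervals [1e-4, 1e-3] and [0.1, 0.5].

```python
from itertools import product

# Fixed settings as reported in the paper's experiment setup.
BASE_CONFIG = {
    "embedding_size": 64,
    "batch_size": 512,
    "num_negative_samples": 10,
    "num_conv_layers": 3,   # L
    "ssl_weight": 0.5,      # delta
}

# Assumed grid points inside the tuning intervals given in the paper.
LEARNING_RATES = [1e-4, 5e-4, 1e-3]   # tuned in [1e-4, 1e-3]
DROPOUT_RATES = [0.1, 0.3, 0.5]       # tuned in [0.1, 0.5]

def config_grid():
    """Yield one full configuration per (learning_rate, dropout) pair."""
    for lr, dropout in product(LEARNING_RATES, DROPOUT_RATES):
        yield {**BASE_CONFIG, "learning_rate": lr, "dropout": dropout}

configs = list(config_grid())
print(len(configs))  # 9 candidate configurations
```

Each candidate configuration would then be evaluated on a held-out validation set, with the best-performing pair of learning rate and dropout retained.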