DisCo: Graph-Based Disentangled Contrastive Learning for Cold-Start Cross-Domain Recommendation

Authors: Hourun Li, Yifan Wang, Zhiping Xiao, Jia Yang, Changling Zhou, Ming Zhang, Wei Ju

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on four benchmark CDR datasets demonstrate that DisCo consistently outperforms existing state-of-the-art baselines, thereby validating the effectiveness of both DisCo and its components.
Researcher Affiliation Academia 1 State Key Laboratory for Multimedia Information Processing, School of Computer Science, PKU-Anker LLM Lab, Peking University, Beijing, China 2 Computer Center, Peking University, Beijing, China 3 School of Information Technology & Management, University of International Business and Economics, Beijing, China 4 Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA 5 College of Computer Science, Sichuan University, Chengdu, China
Pseudocode No The paper describes the proposed framework and its components using mathematical formulations and textual descriptions, but does not include structured pseudocode or algorithm blocks.
Open Source Code Yes The code is released on https://github.com/HourunLi/2025-AAAI-DisCo
Open Datasets Yes We experiment on four domain pairs from the public Amazon dataset 1, namely music-movie, phone-electronic, cloth-sport, and game-video. ... 1http://jmcauley.ucsd.edu/data/amazon/index_2014.html
Dataset Splits Yes Following prior works (Cao et al. 2022b, 2023), we randomly select 20% of overlapping users (i.e., those observed in both source and target domains) and treat them as cold-start users by removing their target domain interactions during testing and validation, using the remaining users for training.
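The cold-start split quoted above can be sketched as a small helper. This is an illustrative reconstruction, not the authors' code: the function name and data layout (a dict mapping users to target-domain interactions) are assumptions; only the 20% hold-out of overlapping users comes from the quote.

```python
import random

def cold_start_split(source_users, target_interactions, ratio=0.2, seed=0):
    """Hold out a fraction of overlapping users as cold-start users.

    Hypothetical helper mirroring the split described in the paper:
    users observed in both domains are candidates, and `ratio` of them
    have their target-domain interactions withheld for validation/testing.
    """
    # Overlapping users appear in the source domain and have target interactions.
    overlapping = sorted(set(source_users) & set(target_interactions))
    rng = random.Random(seed)
    held_out = set(rng.sample(overlapping, int(len(overlapping) * ratio)))
    # Remaining users keep their target-domain interactions for training.
    train = {u: its for u, its in target_interactions.items() if u not in held_out}
    # Held-out users are evaluated as cold-start: no target interactions at train time.
    eval_sets = {u: target_interactions[u] for u in held_out}
    return train, eval_sets
```

In practice the held-out portion would be further divided between validation and test, a detail the quoted passage does not specify.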
Hardware Specification No The paper describes experimental parameters and tuning methodologies but does not specify any particular hardware used for running the experiments (e.g., GPU/CPU models, memory).
Software Dependencies No The paper does not provide specific ancillary software details with version numbers (e.g., programming language versions, library versions, or solver versions) used to replicate the experiment.
Experiment Setup Yes In our experiments, we set the embedding dimension to 128, the batch size to 1,024, and the slope of LeakyReLU to 0.05. Specifically, we tune the number of graph encoder layers L in the range [1, 6], the number of latent factors K in the range [1, 6], and the values of β and λ of the objective function in the range [0, 0.5]. Additionally, we tune the dropout rate in the range [0, 0.5], and the learning rate in the range [0, 0.005].
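The reported setup amounts to a few fixed hyperparameters plus a search over ranges. A minimal sketch of that search space follows; the discrete step values inside each range are assumptions (the paper gives only the ranges), and the grid construction itself is illustrative:

```python
from itertools import product

# Fixed settings reported in the paper.
fixed = {"embedding_dim": 128, "batch_size": 1024, "leaky_relu_slope": 0.05}

# Tuning ranges from the paper; the specific sampled values are assumed.
search_space = {
    "num_layers_L":  list(range(1, 7)),       # graph encoder layers, [1, 6]
    "num_factors_K": list(range(1, 7)),       # latent factors, [1, 6]
    "beta":          [0.0, 0.1, 0.3, 0.5],    # objective weight, [0, 0.5]
    "lambda_":       [0.0, 0.1, 0.3, 0.5],    # objective weight, [0, 0.5]
    "dropout":       [0.0, 0.25, 0.5],        # dropout rate, [0, 0.5]
    "lr":            [0.0005, 0.001, 0.005],  # learning rate, (0, 0.005]
}

# Enumerate every combination, merging in the fixed settings.
keys = list(search_space)
grid = [dict(zip(keys, combo), **fixed) for combo in product(*search_space.values())]
```

Even this coarse discretization yields thousands of configurations, which is why papers typically report only the ranges rather than the full grid.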