Towards Unbiased Information Extraction and Adaptation in Cross-Domain Recommendation

Authors: Yibo Wang, Yingchun Jian, Wenhao Yang, Shiyin Lu, Lei Shen, Bing Wang, Xiaoyi Zeng, Lijun Zhang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To verify the effectiveness of our method, we conduct extensive experiments on three real-world datasets, each of which contains three extremely sparse domains. Experimental results demonstrate the considerable superiority of our proposed method compared to baselines. Empirically, we conduct extensive experiments over three real-world datasets: Amazon (Ni, Li, and McAuley 2019), Douban (Zhu et al. 2019) and IE datasets. Experimental results demonstrate that UIEA exhibits significant performance improvements over its competitors. We summarize the performance of UIEA and baseline methods on three datasets in Table 2. Overall, UIEA significantly outperforms single-domain and cross-domain baselines (Q1).
Researcher Affiliation | Collaboration | 1 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; 2 School of Artificial Intelligence, Nanjing University, Nanjing 210023, China; 3 Alibaba Group, Hangzhou 310052, China
Pseudocode | Yes | Algorithm 1: The UIEA framework
Open Source Code | No | The paper does not explicitly provide an open-source code link or an affirmative statement about code release.
Open Datasets | Yes | Our experiments are conducted on three real-world datasets, each of which includes three domains: Amazon (Books, Movies and Elec) (Ni, Li, and McAuley 2019), Douban (Zhu et al. 2019), and IE (BR, KR, US).
Dataset Splits | Yes | All datasets are randomly divided into training, validation and test sets with the ratio of 7:1:2.
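The reported 7:1:2 random split can be reproduced with a short, dependency-free sketch; the function name and the fixed seed below are illustrative assumptions, not taken from the authors' (unreleased) code.

```python
import random

def split_dataset(records, ratios=(0.7, 0.1, 0.2), seed=42):
    """Randomly split records into train/validation/test with the 7:1:2 ratio."""
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must sum to 1"
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = records[:]              # copy so the input is left untouched
    rng.shuffle(shuffled)
    n_train = int(ratios[0] * len(shuffled))
    n_val = int(ratios[1] * len(shuffled))
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```

In practice the split would be applied per domain (e.g. separately for Books, Movies and Elec) so every domain retains the same 7:1:2 proportion.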
Hardware Specification | Yes | All experiments are conducted on a single machine equipped with Tesla V100 GPUs.
Software Dependencies | No | All methods are implemented with the PyTorch framework, and we employ Adam (Kingma and Ba 2014) with default parameters as the optimizer. The paper mentions the PyTorch framework and the Adam optimizer but does not specify their version numbers.
Experiment Setup | Yes | In the pretraining, we set the embedding dimension n = 64 and hyper-parameters λ_U = λ_V = 10^-5 in the BPR loss (3), and the mini-batch size N = 2048. In the UIE module, we configure the encoder of the embedding generator with layer sizes [3n, 2n, n], and the decoder with layer sizes [n, 2n, 3n]. Additionally, the hyper-parameter in (8) is set as α = 0.1. In the UIA module, both MLP_key and MLP_value have layer sizes [2n, n], and MLP_query has layer sizes [n, n]. The learning rate is chosen from {10^-4, ..., 10^-1} for each method.
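The hyper-parameters above can be collected into a single configuration for a re-implementation attempt. This is a hypothetical sketch: the dictionary keys and the function name are our own labels, since the authors' code is not released; only the values come from the paper's stated setup.

```python
def uiea_config(n=64):
    """Hyper-parameter configuration reported for UIEA (keys are illustrative)."""
    return {
        "embedding_dim": n,                      # pretraining embedding dimension
        "lambda_u": 1e-5,                        # regularization in BPR loss (3)
        "lambda_v": 1e-5,
        "batch_size": 2048,                      # mini-batch size N
        "alpha": 0.1,                            # hyper-parameter in eq. (8)
        "uie_encoder": [3 * n, 2 * n, n],        # embedding-generator encoder
        "uie_decoder": [n, 2 * n, 3 * n],        # embedding-generator decoder
        "uia_mlp_key": [2 * n, n],               # MLP_key layer sizes
        "uia_mlp_value": [2 * n, n],             # MLP_value layer sizes
        "uia_mlp_query": [n, n],                 # MLP_query layer sizes
        "lr_grid": [1e-4, 1e-3, 1e-2, 1e-1],     # learning-rate search grid
    }
```

With n = 64 this yields an encoder of sizes [192, 128, 64] and a decoder of sizes [64, 128, 192], i.e. the decoder mirrors the encoder, which is consistent with a reconstruction-style embedding generator.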