DR-VAE: Debiased and Representation-enhanced Variational Autoencoder for Collaborative Recommendation
Authors: Fan Wang, Chaochao Chen, Weiming Liu, Minye Lei, Jintao Chen, Yuwen Liu, Xiaolin Zheng, Jianwei Yin
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide experimental validations over four datasets to substantiate the efficacy of our DR-VAE framework. In this section, we carry out extensive experiments to answer the four main questions: RQ1: Can our DR-VAE effectively learn users' true preferences compared to state-of-the-art debiasing baselines? RQ2: Does our R-VAE improve existing VAEs in terms of representational ability? RQ3: What are the effects of the different components of our proposal? RQ4: How does the performance of DR-VAE vary w.r.t. different values of the hyper-parameters? |
| Researcher Affiliation | Academia | 1College of Computer Science and Technology, Zhejiang University, China 2College of Computer Science and Technology, Jilin University, China 3College of Computer Science and Technology, China University of Petroleum (East China), China {fanwang97, zjuccc, 21831010}@zju.edu.cn, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods using mathematical equations and prose, but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It mentions a GitHub link in a footnote (https://github.com/JingsenZhang/Recbole-Debias/) in the context of semi-synthetic datasets from Recbole-Debias, which is a third-party resource used by the authors, not their own implementation code for DR-VAE. |
| Open Datasets | Yes | We conduct experiments on two groups of datasets: (1) Semi-synthetic datasets from Recbole-Debias, including ML 100K (Harper and Konstan 2015) and KuaiRec (Gao et al. 2022), where 50% of the data is biased (normal) and 50% unbiased (intervened). (2) Real-world datasets, including Amazon Toys (Ruining and Julian 2016) and ModCloth (Misra, Wan, and McAuley 2018), with no intervention. |
| Dataset Splits | Yes | The semi-synthetic datasets are split into training, validation, and test sets as per Recbole-Debias. The two real-world datasets are split 8:1:1 into training, validation, and test sets. |
| Hardware Specification | Yes | Experiments were conducted on an NVIDIA RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and setting hyperparameters like learning rate and weight decay, but does not specify version numbers for any software libraries, frameworks, or programming languages used (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | We optimized the models using the Adam optimizer with a learning rate of 0.001 and weight decay λ of 0.01. The latent dimension D was set to 300, and the batch size N to 32. For the hyperparameters in DR-VAE, we set β = 0.01 and η = 0.3 for all datasets. |
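Since the paper releases no implementation, the reported splits and hyperparameters can be captured in a minimal configuration sketch. The class and function names below (`DRVAEConfig`, `split_811`) are hypothetical, chosen only to mirror the settings quoted above; the actual DR-VAE architecture is defined by the paper's equations and is not reproduced here.

```python
from dataclasses import dataclass


@dataclass
class DRVAEConfig:
    """Hyperparameters as reported in the paper's experiment setup."""
    latent_dim: int = 300        # latent dimension D
    batch_size: int = 32         # batch size N
    learning_rate: float = 1e-3  # Adam learning rate
    weight_decay: float = 1e-2   # weight decay λ
    beta: float = 0.01           # β, fixed for all datasets
    eta: float = 0.3             # η, fixed for all datasets


def split_811(n: int) -> tuple[int, int, int]:
    """Return 8:1:1 train/validation/test sizes for n interactions.

    Any rounding remainder is assigned to the test split so the
    three parts always sum to n.
    """
    n_train = int(n * 0.8)
    n_valid = int(n * 0.1)
    return n_train, n_valid, n - n_train - n_valid
```

With these values, an optimizer matching the quoted setup would be constructed as, e.g., `torch.optim.Adam(model.parameters(), lr=cfg.learning_rate, weight_decay=cfg.weight_decay)`.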