Robust Graph Based Social Recommendation Through Contrastive Multi-View Learning
Authors: Fei Xiong, Tao Zhang, Shirui Pan, Guixun Luo, Liang Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on three real-world datasets demonstrate the superior performance of RGCML compared to several state-of-the-art (SOTA) baselines. |
| Researcher Affiliation | Academia | ¹School of Electronic and Information Engineering, Beijing Jiaotong University; ²School of Information and Communication Technology, Griffith University; ³School of Computer Science and Technology, Beijing Jiaotong University; ⁴School of Computer Science, Northwestern Polytechnical University. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology using mathematical equations and textual explanations but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Three real-world datasets, Douban, Ciao, and Yelp, are used in our experiments to evaluate RGCML. |
| Dataset Splits | Yes | We split the interaction records into training, validation, and testing sets with a ratio of 7:1:2. |
| Hardware Specification | Yes | We use an NVIDIA RTX 4090 GPU to run the experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer (Kingma and Ba 2014) but does not specify versions for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | As for the general settings of all methods, we empirically set the embedding size D to 64. The models are initialized with the Xavier method and optimized by the Adam optimizer (Kingma and Ba 2014). For graph-based models, the number of propagation layers is fixed at 2. As for the RGCML-specific parameters, the number of global intent prototypes K is selected from [100, 300, 500, 700, 1000], α is selected from [0.5, 0.6, 0.7, 0.8, 0.9, 0.99], λ1 and λ2 are tuned from [0.01, 0.05, 0.1, 0.5], and temperature coefficients τ1 and τ2 are tuned from [0.05, 0.1, 0.2]. |
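The reported setup above can be expressed as a reproducibility sketch. This is a minimal illustration, not the authors' released code: the dictionary keys, function names, and the assumption that λ1/λ2 and τ1/τ2 are tuned independently over their shared ranges are all hypothetical, and only the values and the 7:1:2 split come from the paper.

```python
import itertools
import random

# Fixed general settings reported in the paper (illustrative config names).
GENERAL = {
    "embedding_dim": 64,        # embedding size D
    "propagation_layers": 2,    # fixed for graph-based models
    "init": "xavier",
    "optimizer": "adam",        # Adam (Kingma and Ba 2014)
}

# RGCML-specific search space as reported; treating lambda1/lambda2 and
# tau1/tau2 as independently tuned is an assumption for this sketch.
SEARCH_SPACE = {
    "K":       [100, 300, 500, 700, 1000],       # global intent prototypes
    "alpha":   [0.5, 0.6, 0.7, 0.8, 0.9, 0.99],
    "lambda1": [0.01, 0.05, 0.1, 0.5],
    "lambda2": [0.01, 0.05, 0.1, 0.5],
    "tau1":    [0.05, 0.1, 0.2],                 # temperature coefficients
    "tau2":    [0.05, 0.1, 0.2],
}

def grid(space):
    """Yield every hyperparameter combination in the search space."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def split_interactions(records, seed=0):
    """Shuffle interaction records and split 7:1:2 into train/valid/test."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_valid = int(0.7 * n), int(0.1 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_valid],
            shuffled[n_train + n_valid:])
```

With the ranges above, an exhaustive grid would contain 5 × 6 × 4 × 4 × 3 × 3 = 4320 configurations, which is why papers typically tune such parameters one at a time rather than jointly.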