CUGF: A Reliable and Fair Recommendation Framework

Authors: Nitin Bisht, Xiuwen Gong, Guandong Xu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments by applying CUGF on top of various recommendation models and representative datasets to validate its effectiveness with respect to recommendation performance (in terms of average set size) and fairness (in terms of the two defined fairness metrics), the results of which demonstrate the validity of the proposed framework.
Researcher Affiliation | Academia | 1 University of Technology Sydney, 2 The Education University of Hong Kong. EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: CUGF Algorithm
Open Source Code | Yes | Code is publicly available at https://github.com/kalpiree/CUGF-RS.
Open Datasets | Yes | We generate four datasets via two different grouping methods on two publicly available datasets, i.e., Amazon Office (e-Commerce) (McAuley et al. 2015) and MovieLens (Movies) (Harper and Konstan 2015), to validate the proposed framework CUGF.
Dataset Splits | Yes | Moreover, we split the held-out training data into the calibration data (60%) and testing data (40%).
Hardware Specification | No | No specific hardware details (CPU/GPU models, memory, or cloud instance types) are provided in the paper.
Software Dependencies | No | The paper mentions the Adam optimizer and lists several base recommendation models (DeepFM, LightGCN, GMF, MLP, NeuMF) but does not provide version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages.
Experiment Setup | Yes | All the methods are trained using the Adam optimizer. The batch size is set to 256, the learning rate to 0.001, and we train for 20 epochs. Detailed configurations for all base models are as follows: GMF uses an embedding size of 8; MLP employs layers of [64, 32, 16] with ReLU activation; NeuMF combines GMF and MLP with a GMF embedding size of 8 and MLP layers of [64, 32, 16], ReLU activation; DeepFM integrates 8 latent factors with deep layers of [50, 25, 10] with ReLU activation; LightGCN uses an embedding size of 8 and 3 layers with ReLU activation.
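The reported dataset split (60% calibration / 40% testing) and the shared hyperparameters can be sketched as below. This is a minimal illustration only: the helper name, random seed, and shuffling strategy are assumptions and are not taken from the released CUGF-RS code; the configuration dictionaries simply restate the values quoted from the paper.

```python
import random

def calibration_test_split(interactions, calib_frac=0.6, seed=42):
    """Split held-out data into calibration (60%) and testing (40%) sets.

    Shuffling with a fixed seed is an assumption for reproducibility;
    the paper does not specify how the split is randomized.
    """
    rng = random.Random(seed)
    shuffled = list(interactions)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * calib_frac)
    return shuffled[:cut], shuffled[cut:]

# Shared training hyperparameters reported for all base models.
TRAIN_CONFIG = {
    "optimizer": "Adam",
    "batch_size": 256,
    "learning_rate": 1e-3,
    "epochs": 20,
}

# Per-model architecture settings as listed in the paper.
BASE_MODELS = {
    "GMF":      {"embedding_size": 8},
    "MLP":      {"layers": [64, 32, 16], "activation": "ReLU"},
    "NeuMF":    {"gmf_embedding_size": 8,
                 "mlp_layers": [64, 32, 16], "activation": "ReLU"},
    "DeepFM":   {"latent_factors": 8,
                 "deep_layers": [50, 25, 10], "activation": "ReLU"},
    "LightGCN": {"embedding_size": 8, "num_layers": 3,
                 "activation": "ReLU"},
}

calib, test = calibration_test_split(range(1000))
```

With 1000 held-out interactions this yields 600 calibration and 400 testing examples, matching the 60/40 proportions stated in the paper.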