Feature-Structure Adaptive Completion Graph Neural Network for Cold-start Recommendation

Authors: Songyuan Lei, Xinglong Chang, Zhizhi Yu, Dongxiao He, Cuiying Huo, Jianrong Wang, Di Jin

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental — "Experimental results on multiple public datasets demonstrate significant improvements in our proposed FS-GNN in cold-start scenarios, outperforming or being competitive with state-of-the-art methods. In this section, we conduct extensive experiments on three public recommendation datasets to validate the effectiveness of our proposed FS-GNN in various cold-start recommendation scenarios."
Researcher Affiliation: Collaboration — Songyuan Lei1, Xinglong Chang1,2, Zhizhi Yu1*, Dongxiao He1, Cuiying Huo1, Jianrong Wang1, Di Jin1*. 1College of Intelligence and Computing, Tianjin University, Tianjin, China; 2Qijia Youdao Network Technology (Beijing) Co., Ltd., Beijing, China.
Pseudocode: No — The paper describes its methods with mathematical equations and descriptive text but does not include any clearly labeled "Pseudocode" or "Algorithm" blocks with structured steps.
Open Source Code: No — The paper contains no explicit statement about releasing source code and provides no link to a code repository for the described methodology.
Open Datasets: Yes — "We select three widely used public benchmark datasets, namely MovieLens-100K, MovieLens-1M [1], and Yelp [2], to verify the effectiveness of our proposed FS-GNN." [1] https://grouplens.org/datasets/movielens/ [2] https://www.yelp.com/dataset/challenge
Dataset Splits: Yes — "For each dataset, we divide it into training set, validation set and testing set with 80%, 10% and 10%, respectively."
Hardware Specification: No — The paper does not provide specific hardware details such as GPU models, CPU types, or memory capacities used for the experiments.
Software Dependencies: No — The paper mentions using LLMs such as gpt-3.5-turbo and text-embedding-ada-002, and refers to GAT, GCN, and LightGCN as model components, but it does not specify version numbers for these components or for any programming languages or libraries.
Experiment Setup: Yes — "For our proposed FS-GNN, in the feature completion module, when we use an LLM to complete user features, we utilize up to 30 historical interactions to ensure the prompt does not exceed the token limit. We use GAT as the encoder and GCN as the decoder with {64, 128} units in the hidden layer, and set the dropout rate p_drop to 0.3. For the structure completion module, we set the top-k for both the kNN-based and PPR-based structure completion strategies in {10, 15, ..., 25}. We set the weighted loss coefficient λ for the feature completion module to 0.5, the weighted loss coefficient µ for the structure completion module to 0.5, the learning rate to 0.005, and the weight decay to 0.0005. We set the embedding dimension to 64 for all compared methods. We use Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) as evaluation metrics."
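The setup rows above report an 80/10/10 train/validation/test split and evaluation by MAE and RMSE. A minimal sketch of those two metrics and the split, assuming standard definitions (the data, split size, and function names here are illustrative, not taken from the paper):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error: average magnitude of rating errors
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    # Root Mean Square Error: penalizes large errors more heavily
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Illustrative 80%/10%/10% split over 1000 shuffled interaction indices
rng = np.random.default_rng(0)
idx = rng.permutation(1000)
train, val, test = np.split(idx, [int(0.8 * 1000), int(0.9 * 1000)])

# Toy rating predictions to show the metric calls
y_true = np.array([4.0, 3.0, 5.0, 2.0])
y_pred = np.array([3.5, 3.0, 4.0, 2.5])
print(mae(y_true, y_pred))   # 0.5
print(rmse(y_true, y_pred))  # ~0.6124
```

Lower values are better for both metrics; RMSE is never smaller than MAE on the same errors, and the gap grows when a few predictions are far off.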