DOGE: LLMs-Enhanced Hyper-Knowledge Graph Recommender for Multimodal Recommendation

Authors: Fanshen Meng, Zhenhua Meng, Ru Jin, Rongheng Lin, Budan Wu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Comprehensive experimentation across three public real-world datasets illustrates that DOGE attains state-of-the-art (SOTA) performance, exhibiting a 7.2% improvement over the strongest baseline. To evaluate the effectiveness of our proposed model, we perform extensive experiments on real-world Amazon datasets (McAuley et al. 2015)."
Researcher Affiliation: Academia. Fanshen Meng, Zhenhua Meng, Ru Jin, Rongheng Lin*, Budan Wu; State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications. EMAIL
Pseudocode: No. The paper describes the methodology using mathematical equations and textual explanations, such as "Modality Enhancement Method Based on LLMs" and "Constructing Hyper-Knowledge Graph", but includes no explicit pseudocode or algorithm blocks.
Open Source Code: No. The paper states: "We implement our proposed model using PyTorch within the MMRec (Zhou 2023) framework", which indicates reliance on an existing framework but does not state that the authors' own DOGE implementation is open-source or publicly available.
Open Datasets: Yes. "To evaluate the effectiveness of our proposed model, we perform extensive experiments on real-world Amazon datasets (McAuley et al. 2015). For a deeper analysis of the model's performance in handling larger datasets with sparser data, we select the Baby, Home and Kitchen, and Electronics datasets, which we refer to as Baby, Kitchen and Electronics."
Dataset Splits: Yes. "Following prior settings, we divide historical interactions into training, validation, and test sets using an 8:1:1 ratio."
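The 8:1:1 split described above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the random-shuffle strategy are assumptions (papers in this area sometimes split per user instead):

```python
import random

def split_interactions(interactions, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle user-item interactions and split them into
    train/validation/test sets with the given ratios (8:1:1 by default).
    Hypothetical helper, not the paper's implementation."""
    rng = random.Random(seed)
    shuffled = interactions[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(ratios[0] * n)
    n_valid = int(ratios[1] * n)
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test

# Example: 100 dummy (user, item) pairs split 80/10/10
pairs = [(u, i) for u in range(10) for i in range(10)]
train, valid, test = split_interactions(pairs)
print(len(train), len(valid), len(test))  # 80 10 10
```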
Hardware Specification: Yes. "Model training is conducted using an RTX 4090 GPU equipped with 24GB of memory."
Software Dependencies: No. The paper mentions "PyTorch", the "MMRec (Zhou 2023) framework", and "pre-trained sentence-transformers (Reimers and Gurevych 2019)", but does not provide specific version numbers for these software components.
Experiment Setup: Yes. "We implement our proposed model using PyTorch within the MMRec (Zhou 2023) framework, setting the user and item embedding dimensions for all models to 64. Model parameters are initialized with the Xavier method (Glorot and Bengio 2010); we employ the Adam optimizer (Kingma and Ba 2014) and set the batch size to 2048. We construct a hyperparameter grid for learning rate and regularization weight, with values drawn from {1e-1, 1e-2, 1e-3, 1e-4} for both, resulting in 16 parameter combinations. We fix the layers of our heterogeneous graph at 2 and the layers of our homogeneous graph at 1. The top-K is set to 40 for the user graph, and for the item graph Gr, K is set to 10. We establish 1000 epochs as the upper limit for training, employing early stopping after 20 epochs, driven by the R@20 measure."
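The grid search and early-stopping behavior described in the setup can be sketched as below. This is a hedged illustration under the stated assumptions (4 x 4 grid over learning rate and regularization weight; stop when Recall@20 has not improved for 20 epochs); the helper names are hypothetical, not the authors' API:

```python
from itertools import product

# Hyperparameter grid from the setup above: learning rate and regularization
# weight each drawn from {1e-1, 1e-2, 1e-3, 1e-4}, giving 4 x 4 = 16 combos.
values = [1e-1, 1e-2, 1e-3, 1e-4]
grid = [{"lr": lr, "reg_weight": w} for lr, w in product(values, values)]
print(len(grid))  # 16

def should_stop(recall_history, patience=20):
    """Early stopping driven by Recall@20: return True once the best value
    has not strictly improved for `patience` consecutive epochs.
    Hypothetical helper, not the paper's implementation."""
    best = float("-inf")
    last_improve = 0
    for i, r in enumerate(recall_history):
        if r > best:
            best = r
            last_improve = i
    return len(recall_history) - 1 - last_improve >= patience
```

A 1000-epoch training loop would call `should_stop` on the per-epoch validation Recall@20 list and break as soon as it returns True.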