Personalized Negative Reservoir for Incremental Learning in Recommender Systems

Authors: Antonios Valkanas, Yuening Wang, Yingxue Zhang, Mark Coates

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section empirically evaluates the proposed method, GraphSANE. Our discussion is centered on the following research questions (RQs). Experiment code and method implementation are available online.
Researcher Affiliation | Collaboration | Antonios Valkanas (EMAIL), McGill University, Mila, ILLS; Yuening Wang (EMAIL), Huawei Noah's Ark Lab; Yingxue Zhang (EMAIL), Huawei Noah's Ark Lab; Mark Coates (EMAIL), McGill University, Mila, ILLS
Pseudocode | Yes | This section summarizes our algorithm and provides a pseudo-code implementation of the reservoir updates at each incremental training block in Alg. 1. Additionally, we show how the reservoir is used during training to sample negatives in Alg. 2.
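The two-step pattern described above (update a per-user reservoir once per incremental block, then draw negatives from it during training) can be sketched with classic reservoir sampling. This is a generic illustration only, not the paper's Alg. 1/2; the class name, capacity, and uniform sampling policy are assumptions, whereas the paper's method personalizes the reservoir per user.

```python
import random
from collections import defaultdict

class NegativeReservoir:
    """Generic per-user negative reservoir (illustrative sketch only,
    not the paper's personalized construction)."""

    def __init__(self, capacity=50, seed=0):
        self.capacity = capacity
        self.rng = random.Random(seed)
        self.reservoir = defaultdict(list)  # user -> candidate negative items
        self.seen = defaultdict(int)        # user -> candidates offered so far

    def update(self, user, candidate_items):
        """Reservoir-sampling update, run once per incremental block:
        each candidate ends up retained with equal probability."""
        for item in candidate_items:
            self.seen[user] += 1
            bucket = self.reservoir[user]
            if len(bucket) < self.capacity:
                bucket.append(item)
            else:
                j = self.rng.randrange(self.seen[user])
                if j < self.capacity:
                    bucket[j] = item

    def sample(self, user, k=1):
        """Draw k negatives for a BPR-style update during training."""
        bucket = self.reservoir[user]
        if not bucket:
            return []
        return [self.rng.choice(bucket) for _ in range(k)]
```

A fixed capacity keeps memory bounded across incremental blocks while still letting recent candidates displace older ones with uniform probability.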
Open Source Code | Yes | Experiment code and method implementation are available online1: 1Link to code repository: https://github.com/AntonValk/GraphSANE
Open Datasets | Yes | We empirically evaluate our proposed method on six mainstream recommender system datasets: Gowalla, Yelp, Taobao-14, Taobao-15, Netflix, and MovieLens10M. These datasets vary significantly in the total number of interactions, sparsity, average item and user node degrees, as well as the time span they cover. Detailed dataset statistics are provided in Appendix C, Tab. 4.
Dataset Splits | Yes | To simulate an incremental learning setting, each dataset is split chronologically into a base block containing 60% of the data and four incremental blocks, each with 10% of the remaining data. For additional information on how the blocks are constructed, see Appendix J. Fig. 5 depicts the data split per block.
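A chronological base/incremental split of this shape can be sketched as follows. The function name and the (user, item, timestamp) tuple format are assumptions for illustration; the paper's exact block construction is detailed in its Appendix J.

```python
def chronological_blocks(interactions, base_frac=0.6, n_inc=4):
    """Split time-stamped interactions into one base block (base_frac of
    the data) and n_inc equally sized incremental blocks, in time order.

    interactions: list of (user, item, timestamp) tuples.
    Returns a list of n_inc + 1 blocks.
    """
    ordered = sorted(interactions, key=lambda x: x[2])  # oldest first
    n = len(ordered)
    base_end = int(n * base_frac)
    blocks = [ordered[:base_end]]            # base block
    inc_size = (n - base_end) // n_inc
    for i in range(n_inc):
        start = base_end + i * inc_size
        end = start + inc_size if i < n_inc - 1 else n  # last block takes the remainder
        blocks.append(ordered[start:end])
    return blocks
```

With base_frac=0.6 and n_inc=4 this yields the 60% / 10% / 10% / 10% / 10% layout described above.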
Hardware Specification | No | No specific hardware details, such as GPU/CPU models, memory, or processing units, were mentioned in the paper.
Software Dependencies | No | Our method is implemented in TensorFlow. The backbone graph neural network is the MGCCF (Sun et al., 2019), trained using the hyperparameters shown in Table 11. Incremental learning methods are not used during base-block training, so the loss during the base block is only L_BPR (i.e., no L_KD, L_SANE, or L_KL components).
Experiment Setup | Yes | Our method is implemented in TensorFlow. The backbone graph neural network is the MGCCF (Sun et al., 2019), trained using the hyperparameters shown in Table 11. Incremental learning methods are not used during base-block training, so the loss during the base block is only L_BPR (i.e., no L_KD, L_SANE, or L_KL components).
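The L_BPR term used alone during base-block training is the standard Bayesian Personalized Ranking loss, -log σ(s_pos - s_neg) averaged over (positive, negative) pairs. A minimal sketch of that standard formulation follows; the additional L_KD, L_SANE, and L_KL terms applied during incremental blocks are omitted, and the function name is an assumption.

```python
import math

def bpr_loss(pos_scores, neg_scores):
    """Standard BPR loss: mean of -log sigmoid(s_pos - s_neg) over
    paired positive/negative prediction scores."""
    total = 0.0
    for sp, sn in zip(pos_scores, neg_scores):
        total += -math.log(1.0 / (1.0 + math.exp(-(sp - sn))))
    return total / len(pos_scores)
```

Minimizing this loss pushes each positive item's score above its sampled negative's score, which is why the quality of the sampled negatives (here, drawn from the reservoir) directly shapes the ranking gradient.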