A Selective Learning Method for Temporal Graph Continual Learning
Authors: Hanmo Liu, Shimin Di, Haoyang Li, Xun Jian, Yue Wang, Lei Chen
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three real-world datasets validate the effectiveness of LTF on TGCL. We conduct extensive experiments on real-world web data: Yelp, Reddit, and Amazon. The overall comparison of LTF with other baselines is shown in Tab. 2, with performance trends in Fig. 3 and Fig. 4. In Tab. 3, we evaluate the impact of each LTF component. |
| Researcher Affiliation | Academia | 1Hong Kong University of Science and Technology, China 2Hong Kong University of Science and Technology (Guangzhou), China 3Southeast University, China 4Hong Kong Polytechnic University, China 5Northwestern Polytechnical University, China 6Shenzhen Institute of Computing Sciences, China. Correspondence to: Shimin DI <EMAIL>. |
| Pseudocode | Yes | The pseudo code of the Learning Towards the Future (LTF) method is presented in Algorithm 1 at Appendix D. |
| Open Source Code | Yes | Our code and data are available at https://github.com/liuhanmo321/TGCL_LTF.git. |
| Open Datasets | Yes | We evaluate our method using three real-world datasets: Yelp (URL: https://www.yelp.com/dataset), Reddit (Baumgartner et al., 2020), and Amazon (Ni et al., 2019). |
| Dataset Splits | Yes | For all datasets, each period is split into 80% training, 10% validation, and 10% test. The testing data are not seen during training and validation. |
| Hardware Specification | Yes | The experiments are run on an Nvidia A30 GPU. |
| Software Dependencies | No | The implementation of the backbone models follows the code provided by DyGLib (Yu et al., 2023). Hyper-parameter search is performed with the hyperopt package using 10 iterations. The paper mentions software and tools but does not provide specific version numbers for any key software components or libraries. |
| Experiment Setup | Yes | For all datasets, the dropout rate is 0.4, the learning rate is 0.00001, the number of training epochs per period is 100, and the batch size is 600. Early stopping is applied when the validation AP does not improve for 20 epochs. For the selection-based methods, 1000 events are selected per class at each period of Reddit and Amazon, and 500 for Yelp. Additionally, for LTF, the size of G^sim_N is set to 500 for all datasets, and data are partitioned to have around 10000 samples in each part. The reported results are averaged over 3 random seeds. β is the hyper-parameter weighting the importance of the distribution regularization; α is a weight hyper-parameter and m is the memory budget for G^sub_N. |
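The early-stopping rule quoted in the setup row (stop when validation AP has not improved for 20 epochs, with at most 100 epochs per period) can be sketched as follows. This is a minimal illustration, not the authors' code; `evaluate_ap` is a hypothetical stand-in for a real train-plus-validation step.

```python
# Sketch of the reported training loop: up to 100 epochs per period,
# early stop after 20 epochs without validation-AP improvement.
MAX_EPOCHS = 100   # training epochs per period (from the paper's setup)
PATIENCE = 20      # early-stop patience (from the paper's setup)

def train_with_early_stop(evaluate_ap):
    """Run epochs until AP plateaus; return best AP and epochs used.

    evaluate_ap(epoch) is a hypothetical callback returning the
    validation AP after training for that epoch.
    """
    best_ap = float("-inf")
    epochs_without_improvement = 0
    epochs_run = 0
    for epoch in range(MAX_EPOCHS):
        epochs_run = epoch + 1
        ap = evaluate_ap(epoch)  # validation AP after this epoch
        if ap > best_ap:
            best_ap = ap
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= PATIENCE:
                break  # no improvement for PATIENCE epochs: stop early
    return best_ap, epochs_run

# Toy example: AP improves for 30 epochs, then plateaus at 0.30,
# so training stops 20 epochs later, at epoch 51.
best, stopped_at = train_with_early_stop(lambda e: min(e, 30) / 100)
```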