Exploring Rationale Learning for Continual Graph Learning

Authors: Lei Song, Jiaxing Li, Qinghua Si, Shihan Guan, Youyong Kong

AAAI 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on real-world datasets with varying task lengths demonstrate the effectiveness of our RL-GNN in continuous knowledge assimilation and reduction of catastrophic forgetting." "Next, we empirically investigate the following questions: Q1: Does RL-GNN demonstrate stronger resistance to CF over existing CGL approaches? Q2: Is RL-GNN sensitive to hyperparameters such as α, β and γ? Q3: Can RL-GNN really differentiate between rationales and environments for CGL?" "Table 1 presents our comparison results on three datasets with varying task lengths. The visualization of the performance matrices and learning dynamics is available in Appendix B."
Researcher Affiliation | Academia | "1Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, School of Computer Science and Engineering, Southeast University; 2Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China"
Pseudocode | Yes | "Algorithm 1: Rationale Learning for Continual Graph Learning"
Open Source Code | No | The paper does not provide concrete access to its own source code. It notes only that "As the source code for ER-GS-LS is not publicly available, the values in Table 1 are cited from the publication," which refers to a baseline method, not the authors' own work.
Open Datasets | Yes | "To answer the aforementioned questions, we carry out experiments on three real-world datasets: Aromaticity (Xiong et al. 2019), REDDIT-MULTI-12K (Yanardag and Vishwanathan 2015), and ENZYMES (Borgwardt et al. 2005)."
Dataset Splits | Yes | "Following (Zhang, Song, and Tao 2022a), we divide each dataset into 2-way graph classification tasks, with each category being stratified into training/validation/testing subsets in accordance with 8/1/1 ratio, yielding three task streams suffixed with -CL for CGL."
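The per-class 8/1/1 stratified split described in the quote can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the helper `stratified_split` and its signature are assumptions, and the real pipeline additionally groups classes into 2-way tasks.

```python
import random
from collections import defaultdict

def stratified_split(labels, ratios=(8, 1, 1), seed=0):
    """Split sample indices into train/val/test, stratified per class.

    `labels[i]` is the class of graph i; each class is shuffled and
    divided according to `ratios` (8/1/1 in the paper's setup).
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    train, val, test = [], [], []
    total = sum(ratios)
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n = len(idxs)
        n_train = n * ratios[0] // total
        n_val = n * ratios[1] // total
        train += idxs[:n_train]
        val += idxs[n_train:n_train + n_val]
        test += idxs[n_train + n_val:]   # remainder goes to test
    return train, val, test

# Toy example: 20 graphs in two classes -> 16/2/2 split
labels = [0] * 10 + [1] * 10
tr, va, te = stratified_split(labels)
```

Because the split is done class by class, every task in the resulting -CL stream keeps the same class balance in its training, validation, and test subsets.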
Hardware Specification | Yes | "All experiments are implemented on the PyTorch 3.10 framework, with an NVIDIA 3090 GPU."
Software Dependencies | No | The paper states: "All experiments are implemented on the PyTorch 3.10 framework." The phrasing is ambiguous: "3.10" most likely refers to Python 3.10, but neither a Python version nor a PyTorch version is stated explicitly, so the criterion of specific version numbers for key software components is not met.
Experiment Setup | No | "Moreover, considering that CGL performance is intricately linked to certain hyperparameters such as batch size and training epochs, we maintain consistency across all experiments. Detailed settings are documented in Appendix A.3." The concrete hyperparameter values are deferred to the appendix rather than given in the main text.