Meta Continual Learning on Graphs with Experience Replay

Authors: Altay Unal, Abdullah Akgül, Melih Kandemir, Gozde Unal

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. LLM Response: "We introduce Meta CLGraph, which outperforms the baseline methods over various graph datasets including Citeseer, Corafull, Arxiv, and Reddit. This method illustrates the potential of combining replay buffer and meta learning in the field of continual learning on graphs." Section 5 (Experiments): "The continual learning experimental setups for the GNNs are investigated, and the performance of our setup is compared with the other continual learning setups."
Researcher Affiliation: Academia. LLM Response: Altay Unal (EMAIL), Department of Computer Engineering, Istanbul Technical University; Abdullah Akgül (EMAIL), Department of Mathematics and Computer Science, University of Southern Denmark; Melih Kandemir (EMAIL), Department of Mathematics and Computer Science, University of Southern Denmark; Gozde Unal (EMAIL), Department of Computer Engineering, Istanbul Technical University.
Pseudocode: Yes. LLM Response (Figure 1, the algorithms for ER-GNN, LA-MAML, and Meta CLGraph, the authors' approach):

Algorithm 1 ER-GNN
for t = 1 to M do
  Calculate loss with T_t
  for t' = 1 to t-1 do
    Get T_t' from B
    Calculate loss with T_t'
  end for
  Sum losses from T_t & B
  Extend B with T_t samples
end for

Algorithm 2 LA-MAML
for t = 1 to M do
  Calculate weights for T_t
  Meta loss with calculated weights
  Update learning rates
end for

Algorithm 3 Meta CLGraph
for t = 1 to M do
  G_aux <- B ∪ T_t
  Calculate weights with T_t
  Meta loss with calculated weights on G_aux
  Update learning rates
  Extend B with T_t samples
end for
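The replay structure of Algorithm 3 can be sketched as a minimal loop. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the names `ReplayBuffer` and `meta_clgraph_loop` and the data layout (node ids grouped by class label) are invented here, and the meta-update with calculated weights is left as a comment.

```python
class ReplayBuffer:
    """Fixed per-class budget buffer B, in the spirit of ER-GNN / Meta CLGraph.

    Hypothetical sketch: stores up to `budget` node ids per class label.
    """

    def __init__(self, budget):
        self.budget = budget
        self.store = {}  # class label -> list of stored node ids

    def extend(self, nodes_by_class):
        # Add current-task nodes, respecting the per-class budget.
        for label, nodes in nodes_by_class.items():
            kept = self.store.setdefault(label, [])
            for node in nodes:
                if len(kept) < self.budget:
                    kept.append(node)

    def sample(self):
        # With a small budget (e.g. 10), replay everything in the buffer.
        return [n for nodes in self.store.values() for n in nodes]


def meta_clgraph_loop(tasks, budget=10):
    """Outline of Algorithm 3: build G_aux = B ∪ T_t, update, then extend B.

    `tasks` is a list of dicts mapping class label -> node ids for task T_t.
    Returns the size of G_aux at each task, just to make the flow observable.
    """
    buffer = ReplayBuffer(budget)
    aux_sizes = []
    for task_nodes in tasks:
        current = [n for nodes in task_nodes.values() for n in nodes]
        g_aux = buffer.sample() + current  # auxiliary graph nodes
        # ... calculate weights with T_t, take the meta-loss step on G_aux,
        # ... and update the per-parameter learning rates (omitted here)
        aux_sizes.append(len(g_aux))
        buffer.extend(task_nodes)  # Extend B with T_t samples
    return aux_sizes
```

The key difference from plain ER-GNN is visible in the loop: the buffer and the current task are merged into one auxiliary set before the (meta-)update, rather than summing separate losses.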
Open Source Code: Yes. LLM Response: "We provide a reference implementation of the proposed model and the experimental pipeline." Footnote 1: https://github.com/ituvisionlab/Meta CLGraph
Open Datasets: Yes. LLM Response: "For evaluating Meta CLGraph, four benchmark datasets were employed: Corafull (Bojchevski & Günnemann, 2017), Arxiv (Hu et al., 2021), Reddit (Hamilton et al., 2017), and Citeseer (Sen et al., 2008)."
Dataset Splits: No. LLM Response: "For each task T_i, we have a training node set D^tr_i and a test node set D^tst_i. Node classification aims to predict the right class for each node, i.e., to classify each node in the test node set D^tst_i into the correct class by learning the tasks using D^tr_i. In our graph continual learning setup, we aim to classify incoming nodes based on earlier observed classes, a setting also known as class-incremental learning (Masana et al., 2022)."
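The class-incremental setup described in the quote (each task covers a new group of node classes) can be illustrated with a small helper. The function name `make_class_incremental_tasks` and its arguments are hypothetical, not from the paper; it only shows how tasks T_i arise from partitioning the class set.

```python
def make_class_incremental_tasks(labels, classes_per_task):
    """Partition node classes into sequential tasks (class-incremental sketch).

    `labels` maps node id -> class label; each task gets the next
    `classes_per_task` classes and all nodes belonging to them.
    """
    classes = sorted(set(labels.values()))
    tasks = []
    for i in range(0, len(classes), classes_per_task):
        task_classes = set(classes[i:i + classes_per_task])
        nodes = [n for n, c in labels.items() if c in task_classes]
        tasks.append({"classes": task_classes, "nodes": nodes})
    return tasks
```

In the paper's setting, each task's nodes would then be split further into D^tr_i and D^tst_i; that split is exactly the detail the report marks as unspecified.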
Hardware Specification: Yes. LLM Response: "All experiments are repeated 5 times on one Nvidia RTX A4000 GPU."
Software Dependencies: No. LLM Response: The paper mentions the "Adam optimizer (Kingma & Ba, 2014)" and the "graph convolutional network as the backbone GNN architecture (Kipf & Welling, 2016)" but does not provide specific version numbers for any software libraries or dependencies used in the implementation.
Experiment Setup: Yes. LLM Response: "The experiments are conducted with a learning rate of 0.005, and each task is trained for 200 epochs. The batch size is selected as 2000 for the batched datasets. Adam optimizer (Kingma & Ba, 2014) is selected as the optimizer. All methods use the graph convolutional network as the backbone GNN architecture (Kipf & Welling, 2016). The selection algorithm relies on coverage maximization. The hyperparameters concerning the compared methods are obtained from the benchmark paper (Zhang et al., 2022) and its repository, as the results in the benchmark are reproducible. The buffer budget is selected as 10 for the replay-based methods, namely ER-GNN and Meta CLGraph."
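For reference, the hyperparameters reported in this quote (plus the 5 repetitions noted under Hardware Specification) can be collected into one configuration sketch. The dict keys are illustrative names chosen here, not identifiers from the authors' code.

```python
# Experiment setup as reported in the paper; key names are illustrative.
EXPERIMENT_CONFIG = {
    "learning_rate": 0.005,
    "epochs_per_task": 200,
    "batch_size": 2000,          # used only for the batched datasets
    "optimizer": "Adam",         # Kingma & Ba, 2014
    "backbone": "GCN",           # Kipf & Welling, 2016
    "buffer_budget": 10,         # replay-based methods: ER-GNN, Meta CLGraph
    "buffer_selection": "coverage_maximization",
    "n_repeats": 5,              # each experiment repeated 5 times
}
```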