Revealing an Overlooked Challenge in Class-Incremental Graph Learning

Authors: Daiqing Qi, Handong Zhao, Xiaowei Jia, Sheng Li

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate the effectiveness of our model over baseline models and its effectiveness in different cases with different levels of neighbor information available." ... "Table 1: Accuracy on different datasets under class-incremental learning scenario." ... "Table 2: Accuracy on Citeseer dataset at different levels of neighborhood information." ... "Table 7: Ablation study results."
Researcher Affiliation | Collaboration | Daiqing Qi (University of Virginia, Charlottesville, VA 22903); Handong Zhao (Adobe Research, San Jose, CA 95110); Xiaowei Jia (University of Pittsburgh, Pittsburgh, PA 15260); Sheng Li (University of Virginia, Charlottesville, VA 22903)
Pseudocode | No | The paper describes the methodology with equations (Equations 1-12) and textual descriptions of modules (Graph CVAE, Node AE), but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository or mention code in supplementary materials.
Open Datasets | Yes | "Following related works (Wang et al., 2022; Liu et al., 2021; Zhou & Cao, 2021), we conduct experiments on three benchmark datasets under the continual learning settings: Cora (Sen et al., 2008), CiteSeer (Sen et al., 2008), and Amazon (McAuley et al., 2015), which is a segment of the Amazon co-purchasing graph."
Dataset Splits | Yes | "In our class-incremental setting, we divide Cora into three tasks. The first and second tasks consist of two classes, and the last task has three classes. Citeseer is split into three tasks with two classes in each task. The first task of Amazon has two classes. The second and third tasks on the Amazon dataset have three classes." ... "Because we randomly split the data into training and test sets and also detach the test nodes from the original graph, we cannot control the level of neighborhood information among test nodes with the random partition process."
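The class-incremental split quoted above can be sketched as a simple partition of class IDs into consecutive task groups. This is a minimal illustration, not the authors' code: the helper name and the assignment of specific class IDs to tasks are assumptions; only the per-task class counts come from the paper.

```python
def split_classes_into_tasks(num_classes, sizes):
    """Partition class IDs [0..num_classes) into consecutive task groups.

    `sizes` gives the number of classes in each task, in task order.
    """
    assert sum(sizes) == num_classes, "task sizes must cover all classes"
    tasks, start = [], 0
    for s in sizes:
        tasks.append(list(range(start, start + s)))
        start += s
    return tasks

# Cora has 7 classes -> three tasks of 2, 2, and 3 classes (per the quote).
cora_tasks = split_classes_into_tasks(7, [2, 2, 3])
# Citeseer has 6 classes -> three tasks of 2 classes each.
citeseer_tasks = split_classes_into_tasks(6, [2, 2, 2])
```

In practice the mapping of concrete class labels to tasks may be arbitrary or randomized; the sketch uses consecutive IDs purely for readability.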
Hardware Specification | No | The paper does not mention any specific hardware components (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions that the "SGD optimizer is used" but does not specify any software libraries, frameworks, or programming languages with their version numbers.
Experiment Setup | Yes | "SGD optimizer is used and the initial learning rate is set to 0.01 for Cora and Citeseer and 0.005 for Amazon. The batch size is set to 128 and the model is run for 100 epochs on each dataset."
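The reported setup can be collected into a small configuration sketch. Only the quoted values (SGD, per-dataset learning rates, batch size 128, 100 epochs) are grounded in the paper; the dictionary layout and helper name are illustrative assumptions.

```python
# Hypothetical config mirroring the quoted experiment setup; not the authors' code.
TRAIN_CONFIG = {
    "optimizer": "SGD",
    "batch_size": 128,
    "epochs": 100,
    # Initial learning rates as reported per dataset.
    "learning_rate": {"Cora": 0.01, "Citeseer": 0.01, "Amazon": 0.005},
}

def lr_for(dataset):
    """Look up the quoted initial learning rate for a dataset."""
    return TRAIN_CONFIG["learning_rate"][dataset]
```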