Core Knowledge Learning Framework for Graph
Authors: Bowen Zhang, Zhichao Huang, Guangning Xu, Xiaomao Fan, Mingyan Xiao, Genan Dai, Hu Huang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate significant enhancements achieved by our method compared to state-of-the-art approaches. Specifically, our method achieves notable improvements in accuracy and generalization across various datasets and evaluation metrics, underscoring its effectiveness in addressing the challenges of graph classification. |
| Researcher Affiliation | Academia | 1 Shenzhen Technology University; 2 Beijing Normal University, Zhuhai; 3 Hong Kong Baptist University; 4 California State Polytechnic University; 5 University of Science and Technology of China |
| Pseudocode | Yes | The details of updating Θ and Φ are shown in Algorithm 1. The core knowledge learning is shown in lines 2-3, the graph domain adaptation task is shown in lines 6-9, and the few-shot learning is shown in lines 12-22. |
| Open Source Code | No | The paper does not contain any explicit statement about open-sourcing the code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | For the graph domain adaptation task, we utilize 9 graph classification datasets for evaluation, i.e., Mutagenicity (M) (Kazius, McGuire, and Bursi 2005), Tox21_AhR, FRANKENSTEIN (F) (Orsini, Frasconi, and De Raedt 2015), PROTEINS (Dobson and Doig 2003) (including PROTEINS (P) and DD (D)), COX2 (Sutherland, O'Brien, and Weaver 2003) (including COX2 (C) and COX2_MD (CM)), and BZR (Sutherland, O'Brien, and Weaver 2003) (including BZR (B) and BZR_MD (BM)), obtained from TUDataset (Morris et al. 2020). |
| Dataset Splits | No | The paper mentions partitioning datasets into sub-datasets based on edge density and using one sub-dataset as the source and the others as targets. It also states that ROC-AUC scores are calculated by running experiments ten times. However, it does not provide specific train/test/validation split percentages, sample counts, or explicit references to standard splits for these partitions that would allow exact reproduction of the data partitioning. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory used for running its experiments. |
| Software Dependencies | No | The paper mentions using GIN as the backbone and RDKit to obtain molecular graphs, but it does not specify version numbers for these or any other key software components (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | Yes | In our CKL, we employ GIN (Xu et al. 2019) as the backbone for feature extraction. For the graph domain adaptation task, we utilize one of the sub-datasets as source data and the remaining ones as target data for performance comparison. We set the hidden size to 128 and the learning rate to 0.001 by default, and report classification accuracy in the experiments. For the few-shot learning task, we use RDKit (Landrum et al. 2013) to obtain the molecular graphs and node and edge features, again with GIN (Xu et al. 2019) as the feature-extraction backbone. We calculate the mean and standard deviation of ROC-AUC scores on each task by running experiments ten times. |
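The Experiment Setup row gives the reported hyperparameters (GIN backbone, hidden size 128, learning rate 0.001) and the evaluation protocol (mean and standard deviation of ROC-AUC over ten runs). A minimal sketch of that reporting protocol is below; `run_experiment` is a hypothetical placeholder for training and evaluating the model, since the paper releases no code, and the simulated scores are illustrative only.

```python
import random
import statistics

# Hyperparameters as reported in the paper's experiment setup.
CONFIG = {
    "backbone": "GIN",
    "hidden_size": 128,
    "learning_rate": 0.001,
    "num_runs": 10,
}


def run_experiment(seed: int) -> float:
    """Hypothetical stand-in for one seeded train/evaluate cycle.

    In the actual study this would train the GIN-based model and
    return its ROC-AUC on the target task; here we simulate a score.
    """
    rng = random.Random(seed)
    return 0.75 + rng.uniform(-0.02, 0.02)


# Reporting protocol: mean and (sample) standard deviation over ten runs.
scores = [run_experiment(seed) for seed in range(CONFIG["num_runs"])]
mean_auc = statistics.mean(scores)
std_auc = statistics.stdev(scores)
print(f"ROC-AUC: {mean_auc:.3f} ± {std_auc:.3f}")
```

Seeding each run separately, as sketched here, is what would make the per-run scores (and hence the reported mean ± std) reproducible; the paper does not state its seeding scheme.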