Graph Kernels: A Survey
Authors: Giannis Nikolentzos, Giannis Siglidis, Michalis Vazirgiannis
JAIR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, we perform an experimental evaluation of several of those kernels on publicly available datasets, and provide a comparative study. ... In Section 7, we experimentally evaluate the performance of many graph kernels on several widely-used graph classification benchmark datasets. |
| Researcher Affiliation | Academia | Giannis Nikolentzos EMAIL LIX, École Polytechnique, Palaiseau, 91120, France; Ioannis Siglidis EMAIL LIGM, École des Ponts, Université Gustave Eiffel, CNRS, Marne-la-Vallée, 77420, France; Michalis Vazirgiannis EMAIL LIX, École Polytechnique, Palaiseau, 91120, France. All listed institutions are academic or public research organizations in France, and the email domains also indicate academic affiliations. |
| Pseudocode | No | The paper describes algorithms and procedures (e.g., Geometric Random Walk Kernel, Weisfeiler-Lehman Framework, Neighborhood Hash Kernel) in descriptive text, but it does not include formally structured pseudocode blocks or algorithms with numbered steps or code-like formatting. |
| Open Source Code | No | Specifically, we made use of the GraKeL library which contains implementations of a large number of graph kernels (Siglidis et al., 2020). The authors state that they *used* an existing open-source library (GraKeL) for their experimental comparison, rather than releasing their own source code specifically for the methodologies described within this survey paper. |
| Open Datasets | Yes | All datasets are publicly available (Kersting et al., 2016). ... Kersting, K., Kriege, N. M., Morris, C., Mutzel, P., & Neumann, M. (2016). Benchmark data sets for graph kernels. http://graphkernels.cs.tu-dortmund.de. |
| Dataset Splits | Yes | Therefore, we perform 10-fold cross-validation to obtain an estimate of the generalization performance of each method. For the common datasets, we use the splits (and results) provided by Errica et al. (2020). |
| Hardware Specification | Yes | All experiments were performed on a cluster of 80 Intel Xeon CPU E7 4860 @ 2.27GHz with 1TB RAM. |
| Software Dependencies | No | Specifically, we made use of the GraKeL library which contains implementations of a large number of graph kernels (Siglidis et al., 2020). We also employed a Support Vector Machine (SVM) classifier and in particular, the LIBSVM implementation (Chang & Lin, 2011). While specific libraries (GraKeL, LIBSVM) are mentioned, their version numbers are not explicitly provided. |
| Experiment Setup | Yes | Within each fold, the parameter C of the SVM and the hyperparameters of the kernels (see below) and GNNs were chosen based on a validation experiment on a single 90%/10% split of the training data. We chose the value of parameter C from {10^-7, 10^-5, ..., 10^5, 10^7}. Moreover, we normalized all kernel values as k̂(G_i, G_j) = k(G_i, G_j) / √(k(G_i, G_i) · k(G_j, G_j)) for any graphs G_i, G_j. ... The values of the different hyperparameters of the kernels are shown in Table 4. |
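The kernel normalization quoted in the Experiment Setup row can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the paper or from GraKeL; the function name `normalize_kernel` is ours.

```python
import numpy as np

def normalize_kernel(K):
    """Normalize a precomputed kernel matrix K entrywise:
    k_hat(Gi, Gj) = k(Gi, Gj) / sqrt(k(Gi, Gi) * k(Gj, Gj)),
    so every self-similarity k_hat(Gi, Gi) becomes 1."""
    diag = np.sqrt(np.diag(K))        # sqrt(k(Gi, Gi)) for each graph
    return K / np.outer(diag, diag)   # divide each entry by both scales

# Example: a 2x2 kernel matrix over two graphs.
K = np.array([[4.0, 2.0],
              [2.0, 9.0]])
K_hat = normalize_kernel(K)
```

For `K` above, the normalized off-diagonal value is 2 / (2 · 3) = 1/3, and the diagonal entries become exactly 1, which is what makes normalized kernel values comparable across graphs of different sizes.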