Transferability of Spectral Graph Convolutional Neural Networks

Authors: Ron Levie, Wei Huang, Lorenzo Bucci, Michael Bronstein, Gitta Kutyniok

JMLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Figure 2 we showcase transferability under coarsening on the Bunny mesh... In the top-left of Figure 4 we isolate principle transferability from concept-based transferability on MNIST, and compare a spectral graph ConvNet method with a spatial graph ConvNet method... In Figure 3 we showcase the transferability formula of Thm. 4(1) on the Bunny graph of Figure 2... In the top-middle and right of Figure 4 we test transferability between the Citeseer graph M and its coarsened version G... In Figure 4, bottom, we test the stability of spectral graph filters on the Cora graph with the normalized Laplacian, for different models of graph perturbation and sub-sampling.
Researcher Affiliation | Academia | Ron Levie (EMAIL), Department of Mathematics, Ludwig-Maximilians-Universität München, 80333 München, Germany; Wei Huang (EMAIL), Institute of Computational Science, Università della Svizzera italiana, 6900 Lugano, Switzerland; Lorenzo Bucci (EMAIL), Institute of Computational Science, Università della Svizzera italiana, 6900 Lugano, Switzerland; Michael Bronstein (EMAIL), Department of Computing, Imperial College London, London SW7 2BU, United Kingdom; Gitta Kutyniok (EMAIL), Department of Mathematics, Ludwig-Maximilians-Universität München, 80333 München, Germany
Pseudocode | No | The paper primarily presents mathematical derivations, theorems, proofs, and methodological descriptions in natural language. It does not include any clearly labeled pseudocode blocks or algorithms in a structured, code-like format.
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology, nor does it provide links to code repositories.
Open Datasets | Yes | In Figure 2 we showcase transferability under coarsening on the Bunny mesh... In the top-left of Figure 4 we isolate principle transferability from concept-based transferability on MNIST... In the top-middle and right of Figure 4 we test transferability between the Citeseer graph M and its coarsened version G... In Figure 4, bottom, we test the stability of spectral graph filters on the Cora graph with the normalized Laplacian...
Dataset Splits | No | We train the network on MNIST images of one fixed fine resolution (56×56) and test on images of various coarse resolutions. The graph Laplacian is given by the central difference approximating the second derivative. In this setting, the spectral method, CayleyNet, has higher principle transferability than the spatial method, MoNet: its performance degrades more slowly as the grid is coarsened.
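The row above quotes the paper's construction of the graph Laplacian as the central-difference approximation of the second derivative on the image grid. The paper gives no code; the following is a minimal numpy sketch of that construction (function names are ours, and the extension of the 1-D stencil to a 2-D grid via a Kronecker sum is our assumption about how such a grid Laplacian is typically assembled):

```python
import numpy as np

def central_difference_laplacian(n, h=1.0):
    # Tridiagonal matrix encoding the central-difference stencil for the
    # second derivative on a 1-D grid of n nodes with spacing h:
    # (f[i-1] - 2*f[i] + f[i+1]) / h**2
    L = np.zeros((n, n))
    idx = np.arange(n)
    L[idx, idx] = -2.0
    L[idx[:-1], idx[:-1] + 1] = 1.0  # superdiagonal
    L[idx[1:], idx[1:] - 1] = 1.0    # subdiagonal
    return L / h**2

def grid_laplacian(n):
    # Hypothetical 2-D grid Laplacian (e.g. for n x n MNIST-style images),
    # assembled as the Kronecker sum of two 1-D Laplacians.
    L1 = central_difference_laplacian(n)
    I = np.eye(n)
    return np.kron(L1, I) + np.kron(I, L1)
```

The resulting matrix is symmetric, so its eigendecomposition defines a real graph Fourier basis on which spectral filters can act.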
Hardware Specification | No | The paper describes experimental setups and results but does not specify any particular hardware, such as CPU, GPU, or TPU models, used to run the experiments.
Software Dependencies | No | The paper mentions the Graclus algorithm (Dhillon et al., 2004) as the method used for graph coarsening, but it does not specify any software libraries or frameworks, with version numbers, used for implementation or experimentation.
Experiment Setup | Yes | We consider a simple ConvNet architecture based on three convolutional layers with max pooling, where the max pooling in the third layer collapses each graph to one node, followed by two fully connected layers. In CayleyNet, the Cayley polynomial order of all three convolutional layers is 9, and they produce 32, 32, and 64 output features, respectively. In MoNet, all three convolutional layers contain 18 Gaussian kernels and produce 32, 32, and 64 output features, respectively. Both models contain 10K parameters.
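To make the quoted architecture concrete, here is a minimal numpy sketch of a single spectral convolution layer of the kind the setup describes: each output feature applies a learned order-9 polynomial of the graph Laplacian to each input feature. This is a plain matrix-polynomial (Chebyshev-style) stand-in, not the actual Cayley filter parametrization, which is rational; the class name, random initialization, and ReLU placement are our assumptions for illustration:

```python
import numpy as np

class PolySpectralConv:
    """Hypothetical sketch of one spectral graph convolution layer.

    Each (input, output) feature pair has its own coefficient vector for
    an order-K polynomial p(L); the layer computes sum_i p_i(L) x_i per
    output feature, followed by a ReLU.
    """

    def __init__(self, in_feats, out_feats, order=9, seed=0):
        rng = np.random.default_rng(seed)
        # coeffs[i, j, k] multiplies L^k x_i in output feature j
        self.coeffs = 0.1 * rng.standard_normal((in_feats, out_feats, order + 1))
        self.order = order

    def __call__(self, L, X):
        # L: (n, n) graph Laplacian; X: (n, in_feats) node signals
        n, fin = X.shape
        out = np.zeros((n, self.coeffs.shape[1]))
        for i in range(fin):
            power = X[:, i]                     # L^0 x_i
            for k in range(self.order + 1):
                out += np.outer(power, self.coeffs[i, :, k])
                power = L @ power               # advance to L^(k+1) x_i
        return np.maximum(out, 0.0)             # ReLU

# The paper's architecture stacks three such layers with 32, 32, and 64
# output features, interleaved with max pooling (the third pooling
# collapses the graph to one node), then two fully connected layers.
```

Because the filter is a polynomial in L, it is defined by spectral coefficients alone and can be applied unchanged to Laplacians of different sizes, which is what makes the transferability experiments across resolutions possible.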