Graphon Neural Networks and the Transferability of Graph Neural Networks
Authors: Luana Ruiz, Luiz Chamon, Alejandro Ribeiro
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 6, transferability of GNNs is illustrated in two numerical experiments. |
| Researcher Affiliation | Academia | Luana Ruiz, Luiz F. O. Chamon, and Alejandro Ribeiro — Dept. of Electrical and Systems Eng., University of Pennsylvania, Philadelphia, PA 19143, EMAIL |
| Pseudocode | No | The paper describes algorithms and architectures mathematically but does not include any pseudocode blocks or algorithms labeled as such. |
| Open Source Code | Yes | We use the GNN library available at https://github.com/alelab-upenn/graph-neural-networks and implemented with PyTorch. |
| Open Datasets | Yes | To illustrate Theorem 2 in a graph signal classification setting, we consider the problem of movie recommendation using the MovieLens 100k dataset (Harper and Konstan, 2016). |
| Dataset Splits | Yes | This data is then split between 90% for training and 10% for testing, with 10% of the training data used for validation. |
| Hardware Specification | No | The paper mentions 'Intel Dev Cloud' in the acknowledgments but does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for the experiments. |
| Software Dependencies | Yes | We use the GNN library available at https://github.com/alelab-upenn/graph-neural-networks and implemented with PyTorch. |
| Experiment Setup | Yes | This GNN has L = 1 convolutional layer with F = 32 and K = 5, followed by a readout layer at node 405 that maps its features to a one-hot vector of dimension C = 5 (corresponding to ratings 1 through 5). To generate the input data, we pick the movies rated by user 405 and generate the corresponding movie signals by "zero-ing" out the ratings of user 405 while keeping the ratings given by other users. This data is then split between 90% for training and 10% for testing, with 10% of the training data used for validation. Only training data is used to build the user network in each split. To analyze transferability, we start by training GNNs Φ(Hₙ; Sₙ; xₙ) on user subnetworks consisting of random groups of n = 100, 200, . . . , 900 users, including user 405. We optimize the cross-entropy loss using ADAM with learning rate 10⁻³ and decaying factors β₁ = 0.9 and β₂ = 0.999, and keep the models with the best validation RMSE over 40 epochs. |
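The setup quoted above (one graph-convolutional layer with K = 5 filter taps and F = 32 features, a readout at node 405 producing C = 5 rating logits, trained with ADAM at learning rate 10⁻³ and β = (0.9, 0.999)) can be sketched in plain PyTorch. This is a hedged reconstruction from the quoted description, not the authors' actual code from the alelab-upenn library; the class name `GraphFilterGNN` and the parameter initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphFilterGNN(nn.Module):
    """Sketch of the described architecture: one graph-convolutional
    layer with K filter taps, followed by a linear readout at a single
    node. Hypothetical reconstruction, not the authors' library code."""
    def __init__(self, K=5, F=32, C=5, readout_node=405):
        super().__init__()
        # Filter taps H_k mapping 1 input feature to F output features.
        self.taps = nn.Parameter(0.1 * torch.randn(K, 1, F))
        self.readout = nn.Linear(F, C)
        self.K = K
        self.readout_node = readout_node

    def forward(self, S, x):
        # S: (n, n) graph shift operator; x: (n, 1) graph signal.
        z = x
        out = torch.zeros(x.shape[0], self.taps.shape[2])
        for k in range(self.K):
            out = out + z @ self.taps[k]  # accumulate H_k S^k x
            z = S @ z                     # diffuse the signal one hop
        h = torch.relu(out)
        # Logits over the C = 5 rating classes at the readout node.
        return self.readout(h[self.readout_node])

# Optimizer settings as quoted: ADAM, lr 1e-3, betas (0.9, 0.999),
# with a cross-entropy loss on the rating classes.
model = GraphFilterGNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999))
loss_fn = nn.CrossEntropyLoss()
```

A training loop would then feed (Sₙ, xₙ) pairs for each subnetwork size n, keeping the checkpoint with the best validation RMSE over 40 epochs, as described.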