Is uniform expressivity too restrictive? Towards efficient expressivity of GNNs

Authors: Sammy Khalife, Josué Tonelli-Cueto

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we demonstrate how the size of the neural networks impacts the GNN's ability to express GC2 queries via two experiments. ... Our experiments illustrate that our theoretical estimates hold in practice.
Researcher Affiliation | Academia | Sammy Khalife EMAIL, Josué Tonelli-Cueto EMAIL ... School of Operations Research and Information Engineering, Cornell Tech, Cornell University, NY, USA; Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, USA
Pseudocode | No | The paper describes its methods conceptually and mathematically. It does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The details of the implementation are provided in the supplementary material.
Open Datasets | No | For the first experiment, our set of input graphs is composed of pairs of trees (T[0,k,m], T[1,k,m]) for k, m ∈ N presented in Section 4. These are unicolored for Q1 and colored as follows for Q2: the root s and the x_i are red, and the leaves are blue. ... Our training dataset is composed of 3750 graphs of varying size (between 50 and 2000 nodes) of average degree 5, and generated randomly. Similarly, our testing dataset is composed of 1250 graphs with varying size between 50 and 2000 vertices, of average degree 5. The paper describes how the datasets were generated (synthetic, randomly generated graphs) but does not provide specific access information (link, DOI, formal citation) for a publicly available dataset, nor does it state that the generated datasets are made public.
Dataset Splits | Yes | Our training dataset is composed of 3750 graphs of varying size (between 50 and 2000 nodes) of average degree 5, and generated randomly. Similarly, our testing dataset is composed of 1250 graphs with varying size between 50 and 2000 vertices, of average degree 5.
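The quoted setup only says the graphs are "generated randomly" with average degree 5 and sizes between 50 and 2000 nodes. A minimal stdlib-only sketch under the assumption of a G(n, m) random-graph model (the model choice and all function names here are ours, not the authors'):

```python
import random


def random_graph(n, avg_degree=5.0, seed=None):
    """Sample an undirected G(n, m) graph with mean degree ~= avg_degree.

    Assumption: the paper only states 'generated randomly' with average
    degree 5, so the G(n, m) model is a guess, not the authors' generator.
    """
    rng = random.Random(seed)
    m = round(avg_degree * n / 2)  # sum of degrees = 2m = avg_degree * n
    edges = set()
    while len(edges) < m:          # rejection-sample distinct edges
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return sorted(edges)


def make_split(num_graphs, n_min=50, n_max=2000, seed=0):
    """Build a list of graphs whose sizes are drawn from [n_min, n_max]."""
    rng = random.Random(seed)
    return [random_graph(rng.randint(n_min, n_max), seed=rng.randrange(2**32))
            for _ in range(num_graphs)]


# The quoted split is 3750 train / 1250 test graphs; small counts here
# keep the sketch fast to run.
train_graphs = make_split(8, seed=1)
test_graphs = make_split(4, seed=2)
```

With exactly m = round(5n/2) edges per graph, the average degree 2m/n is 5 up to rounding, matching the quoted description.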
Hardware Specification | Yes | The experiments were conducted on a Mac-OS laptop with an Apple M1 Pro chip.
Software Dependencies | Yes | The neural networks were implemented using PyTorch 2.3.0 and PyTorch Geometric 2.2.0.
Experiment Setup | Yes | In our experiments, we consider the following two queries: Q1(v) := ∃^{≥1} y (E(y, v) ∧ ∃^{≥2} v (E(v, y) ∧ ⊤)) = ∃y (E(y, v) ∧ ∃^{≥2} z E(z, y)); Q2(v) := Red(v) ∧ ∃^{≥1} x (E(x, v) ∧ ∃^{≥1} v (E(v, x) ∧ Blue(v)) ∧ ∃^{≥1} v (E(v, x) ∧ Red(v))). ... For Q1, the ReLU GNN has 4 iterations (the depth of Q1); for Q2, the ReLU GNN has 7 iterations (the depth of Q2). The combination functions are of the form comb_t(x, y) = ReLU(Ax + By + C), where x, y ∈ R^d, t ∈ {0, ..., d−1}, and d = 4 for Q1 and d = 7 for Q2. The parameters A, B and C are specified in Appendix D. ... Both GNNs have 4 and 7 iterations when trained to learn the first and second query respectively. Each iteration has its own combine function, a feedforward neural network with one hidden layer.
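The quoted combine rule comb_t(x, y) = ReLU(Ax + By + C) can be sketched in plain Python. The sum aggregation over neighbours and all helper names below are our assumptions; the actual parameter values live in the paper's Appendix D:

```python
def relu(vec):
    """Elementwise ReLU over a list of floats."""
    return [max(0.0, x) for x in vec]


def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]


def vsum(vectors, dim):
    """Sum a (possibly empty) list of vectors of length dim."""
    out = [0.0] * dim
    for v in vectors:
        out = [a + b for a, b in zip(out, v)]
    return out


def gnn_forward(adj, feats, params):
    """Run one message-passing iteration per (A, B, C) triple.

    comb_t(x, y) = ReLU(A x + B y + C), where x is a node's current state
    and y the aggregate of its neighbours' states.  Sum aggregation is an
    assumption; for Q1 one would stack d = 4 triples, for Q2 d = 7.
    """
    h = feats
    dim = len(feats[0])
    for A, B, C in params:  # t = 0, ..., d-1
        agg = [vsum([h[u] for u in adj[v]], dim) for v in range(len(adj))]
        h = [relu([a + b + c for a, b, c in
                   zip(matvec(A, h[v]), matvec(B, agg[v]), C)])
             for v in range(len(adj))]
    return h
```

A trained variant, as the quoted setup describes, would replace each fixed comb_t by a one-hidden-layer feedforward network with learnable weights.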