Towards a Complete Logical Framework for GNN Expressiveness
Authors: Tuo Xu
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we perform numerical experiments to empirically validate our theoretical results. The motivation of this section is to test the logical expressivity of several GNN models by checking whether they are able to express specific logic formulas on synthetic data. |
| Researcher Affiliation | Academia | Tuo Xu Independent Researcher EMAIL |
| Pseudocode | Yes | Algorithm 1: The 1-WL test (color refinement). Input: G = (A, X) |
| Open Source Code | No | No explicit statement or link for open-source code was found in the paper. |
| Open Datasets | No | We randomly generate graphs as follows. We consider Erdős-Rényi graphs, which are random graphs specified by N, the number of nodes, and p, the probability of each edge existing. We then randomly color each node with a specified probability. |
| Dataset Splits | Yes | Each train and test graph contains 500 nodes. Each test-larger graph contains 1000 nodes. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or cloud instances) are provided in the paper for running experiments. |
| Software Dependencies | No | For all GNNs we choose the aggregation AGG to be sum, and the combination function COM to be COM(x, y) = W[x^T, y^T]^T, where W is a parameter matrix. We refer to GNN-k as the GNN model with k layers, e.g. MPNN-1 refers to a MPNN which has one aggregation-combination layer. For the last layer, the sigmoid function is selected as the activation function. For the rest of the layers, the ReLU function is selected as the activation function. |
| Experiment Setup | Yes | For all GNNs we choose the aggregation AGG to be sum, and the combination function COM to be COM(x, y) = W[x^T, y^T]^T, where W is a parameter matrix. We refer to GNN-k as the GNN model with k layers, e.g. MPNN-1 refers to a MPNN which has one aggregation-combination layer. For the last layer, the sigmoid function is selected as the activation function. For the rest of the layers, the ReLU function is selected as the activation function. |
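The Pseudocode row references Algorithm 1, the 1-WL test (color refinement). The paper only states the input `G = (A, X)`; the following is a minimal sketch of color refinement under assumed data structures (adjacency lists and integer node colors), not the paper's implementation:

```python
def wl_colors(adj, colors, num_iters=10):
    """Refine node colors until stable or num_iters is reached.

    adj: dict mapping node -> list of neighbours; colors: dict node -> int.
    """
    for _ in range(num_iters):
        # New color = (own color, multiset of neighbour colors).
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Compress signatures to small integer color ids.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:  # stable coloring reached
            break
        colors = new_colors
    return colors
```

On a triangle every node keeps a single shared color, while on a 3-node path the refinement separates the middle node from the endpoints, matching the test's ability to distinguish these graphs.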
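The Open Datasets row describes the synthetic data: Erdős-Rényi graphs G(N, p) with nodes colored independently at a given probability. A minimal sketch of that generation process (parameter names and the binary-color choice are assumptions, not from the paper):

```python
import random

def make_er_graph(n, p, color_prob, seed=0):
    """Sample an Erdős-Rényi graph with n nodes, edge probability p,
    and independent binary node colors drawn with probability color_prob."""
    rng = random.Random(seed)
    # Each unordered pair (i, j) becomes an edge with probability p.
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p]
    # Each node is colored (value 1) independently with probability color_prob.
    colors = [int(rng.random() < color_prob) for _ in range(n)]
    return edges, colors
```

Per the Dataset Splits row, train/test graphs would use n = 500 and the test-larger graphs n = 1000.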
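The Experiment Setup row specifies one aggregation-combination layer with AGG = sum and COM(x, y) = W[x^T, y^T]^T, ReLU between layers, and sigmoid on the last layer. A sketch of a single such layer in NumPy, with shapes and the dense-adjacency representation as assumptions:

```python
import numpy as np

def gnn_layer(A, H, W, last=False):
    """One aggregation-combination layer.

    A: (n, n) adjacency matrix; H: (n, d) node features;
    W: (d_out, 2d) parameter matrix.
    """
    agg = A @ H  # AGG = sum over neighbours
    # COM(x, y) = W [x^T, y^T]^T, applied row-wise via concatenation.
    z = np.concatenate([H, agg], axis=1) @ W.T
    # Sigmoid on the last layer, ReLU otherwise.
    return 1.0 / (1.0 + np.exp(-z)) if last else np.maximum(z, 0.0)
```

A GNN-k model in the paper's notation would stack k calls of this layer, passing `last=True` only on the final one.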