Expressive Pooling for Graph Neural Networks

Authors: Veronica Lachi, Alice Moallemy-Oureh, Andreas Roth, Pascal Welke

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To validate our theoretical findings and evaluate the expressivity-increasing capabilities of various pooling operators, we compare their empirical expressivity using the BREC framework (Wang & Zhang, 2024). Our reproducible implementation is publicly available. ... The results confirm our theoretical findings. Node-based pooling approaches fail to enhance expressivity, as evidenced by their inability to distinguish any graph structures across all tested scenarios. This aligns with our theoretical predictions, given that these methods primarily operate at the node level without significantly leveraging the graph topology. In contrast, edge-based methods successfully improve expressivity, with XP demonstrating the highest performance."
Researcher Affiliation | Academia | Veronica Lachi (EMAIL), Fondazione Bruno Kessler; Alice Moallemy-Oureh (EMAIL), University of Kassel; Andreas Roth (EMAIL), TU Dortmund; Pascal Welke (EMAIL), Lancaster University Leipzig and TU Wien
Pseudocode | No | The paper includes definitions, theorems, proofs, and illustrative figures, but it does not contain any clearly labeled pseudocode or algorithm blocks with structured steps.
Open Source Code | Yes | "Our reproducible implementation is publicly available." Repository: https://github.com/aliicee3/brec-pooling
Open Datasets | Yes | "To validate our theoretical findings and evaluate the expressivity-increasing capabilities of various pooling operators, we compare their empirical expressivity using the BREC framework (Wang & Zhang, 2024). ... The BREC dataset comprises 400 pairs of non-isomorphic, WL-indistinguishable graphs, which allows for an empirical evaluation of various methods in their ability to distinguish those graphs."
Dataset Splits | No | "The BREC dataset comprises 400 pairs of non-isomorphic, WL-indistinguishable graphs... The graphs are divided into four main categories: Basic, Regular, Extension, and CFI graphs. Basic Graphs include 60 pairs of 10-node graphs, Regular Graphs consist of 140 pairs, including simple and strongly regular graphs, Extension Graphs are made up of 100 pairs inspired by GNN extensions, and CFI Graphs, the most complex category, include 100 pairs based on the Cai-Fürer-Immerman method (Cai et al., 1992). 270 of these graphs are 3-WL distinguishable." The paper describes the composition of the dataset used for evaluation but does not specify any training/testing/validation splits for the models.
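As a quick sanity check, the per-category pair counts quoted above can be tallied against the stated dataset size (a trivial sketch; the dictionary below is illustrative, not part of the BREC codebase):

```python
# Pair counts per BREC category, as quoted in the paper's description.
brec_categories = {
    "Basic": 60,       # pairs of 10-node graphs
    "Regular": 140,    # simple and strongly regular graphs
    "Extension": 100,  # pairs inspired by GNN extensions
    "CFI": 100,        # Cai-Fürer-Immerman constructions
}

total_pairs = sum(brec_categories.values())
print(total_pairs)  # 400, matching the stated dataset size
```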
Hardware Specification | Yes | "All experiments were run in parallel, with each instance on a Platinum 8468 CPU and 5 GB RAM, apart from the experiments with ASAPool, as those required 100 GB RAM."
Software Dependencies | No | "Our implementation is based on the BREC framework (Wang & Zhang, 2024) and PyTorch Geometric (Fey & Lenssen, 2019)." The paper names the software frameworks used but does not provide version numbers for these dependencies.
Experiment Setup | Yes | "The initial feature of each node is determined by its degree. We apply a total of twelve MPNN layers, with one pooling layer added after four and after eight MPNN layers. Global sum pooling, followed by a two-layer MLP, is used to produce graph-level embeddings that are optimized to differ between non-isomorphic graphs. As MPNN layers, we used the GIN layer (Xu et al., 2019) due to its ability to match the expressivity of the WL test. As pooling layers, we evaluate all methods described in Section 5, which can be divided into node-based operators, edge-based operators, and operators using an expressive SEL function. ... We apply k GIN layers followed by one pooling step, and repeat this process l times. After these steps, another k GIN layers are employed. Thus, the model consists of a total of l pooling steps and k(l + 1) GIN instantiations."
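The layer schedule quoted above (k GIN layers, then one pooling step, repeated l times, followed by a final block of k GIN layers) can be sketched as follows. This is an illustrative outline, not the paper's implementation; the function name is hypothetical, and the paper's actual code builds these layers with PyTorch Geometric:

```python
def build_layer_sequence(k: int, l: int) -> list[str]:
    """Illustrative sketch (not the paper's code) of the architecture
    schedule: k GIN layers followed by one pooling step, repeated l
    times, then another k GIN layers."""
    schedule = []
    for _ in range(l):
        schedule += ["GIN"] * k + ["POOL"]
    schedule += ["GIN"] * k
    return schedule

# The paper's configuration of twelve GIN layers, with pooling after
# the fourth and the eighth layer, corresponds to k=4, l=2.
schedule = build_layer_sequence(k=4, l=2)
print(schedule.count("GIN"))   # 12, i.e. k*(l+1) GIN instantiations
print(schedule.count("POOL"))  # 2, i.e. l pooling steps
```

With k=4 and l=2 the sequence places a pooling step after the fourth and after the eighth GIN layer, matching the configuration described in the quoted setup.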