Diss-l-ECT: Dissecting Graph Data with Local Euler Characteristic Transforms

Authors: Julius von Rohrscheidt, Bastian Rieck

ICML 2025

Reproducibility checklist (Variable: Result. LLM Response):
Research Type: Experimental. In this section, we present experiments to empirically evaluate the performance of the ℓ-ECT-based approach in graph representation learning, focusing on node-classification tasks. We aim to demonstrate how ℓ-ECT representations can capture structural information more effectively than traditional message-passing mechanisms, especially in scenarios with high heterophily (even though we consider other scenarios as well). Our experiments compare the performance of ℓ-ECT-based models to several standard GNN models, namely graph attention networks (Veličković et al., 2018, GAT), graph convolutional networks (Kipf & Welling, 2017, GCN), graph isomorphism networks (Xu et al., 2019, GIN), as well as a heterophily-specific architecture (Zhu et al., 2020, H2GCN).
Researcher Affiliation: Academia. 1 Institute of AI for Health, Helmholtz Munich, Germany; 2 Technical University of Munich, Germany; 3 University of Fribourg, Switzerland. Correspondence to: Julius von Rohrscheidt <EMAIL>, Bastian Rieck <EMAIL>.
Pseudocode: No. The paper describes methods using mathematical formulations and descriptive text, but it does not include any explicitly labeled pseudocode or algorithm blocks with structured, code-like formatting.
Open Source Code: Yes. Our code is available under https://github.com/aidos-lab/Diss-l-ECT.
Open Datasets: Yes. WebKB datasets: For all datasets of the WebKB collection (Pei et al., 2020)... Heterophilous datasets: Platonov et al. (2023) introduced several heterophilous datasets... Amazon dataset: The Amazon dataset (Shchur et al., 2018) consists of the two co-purchase graphs Computers and Photo... Actor/Wikipedia datasets: Moving to additional heterophilous datasets with high feature dimensionality, we compare predictive performance on Actor (Pei et al., 2020) as well as Chameleon and Squirrel (Rozemberczki et al., 2021)... Planetoid datasets: We also analyze node-classification performance on datasets from the Planetoid collection (Yang et al., 2016)... We make use of standard benchmarking datasets, loaded and processed via the PyTorch Geometric library (Fey & Lenssen, 2019).
Dataset Splits: Yes. Planetoid datasets: We also analyze node-classification performance on datasets from the Planetoid collection (Yang et al., 2016), comprising Cora, CiteSeer, and PubMed. We trained all models using a random 75/25 split; cf. Table 5.
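The random 75/25 node split mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the helper name `random_split` and the seeding scheme are our assumptions.

```python
import numpy as np

def random_split(num_nodes, train_frac=0.75, seed=0):
    """Random train/test split over node indices (illustrative helper,
    mirroring the 75/25 split described in the paper)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)          # shuffle node indices
    cut = int(train_frac * num_nodes)          # size of the training set
    train_mask = np.zeros(num_nodes, dtype=bool)
    test_mask = np.zeros(num_nodes, dtype=bool)
    train_mask[perm[:cut]] = True
    test_mask[perm[cut:]] = True
    return train_mask, test_mask
```

Boolean masks are the convention PyTorch Geometric uses for node-level splits, so such masks can be attached directly to a `Data` object.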
Hardware Specification: No. The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies: No. We make use of standard benchmarking datasets, loaded and processed via the PyTorch Geometric library (Fey & Lenssen, 2019).
Experiment Setup: Yes. Implementation details: We use m = l = 64 (but the number of samples may be tuned in practice) and use the resulting m·l-dimensional vector(s) ECT(N_k(x; G))^(m,l), together with the feature vector of x, as additional inputs for the classifier. The architecture of our baseline models includes a two-layer MLP after every graph-neighborhood aggregation layer, as well as skip connections and layer normalization. We train each model for 1000 epochs and report the test accuracy corresponding to the state of the model that admits the maximum validation accuracy during training.
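To make the (m, l)-discretization concrete, here is a minimal NumPy sketch of a local ECT for one node's k-hop neighborhood, treating node feature vectors as coordinates and computing the Euler characteristic χ = V − E of each sublevel set of a directional filtration. The function name `local_ect` and the Gaussian direction sampling are our illustration, not the authors' implementation (which uses m = l = 64 and GPU tensors).

```python
import numpy as np

def local_ect(coords, edges, m=8, l=16, seed=0):
    """Illustrative (m, l)-discretized local ECT of a graph neighborhood.

    coords: (n, d) array of node-feature coordinates of the neighborhood
    edges:  list of (i, j) index pairs within the neighborhood
    Returns an (m * l)-dimensional vector, one Euler characteristic per
    sampled direction and sublevel-set threshold.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(m, coords.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    heights = coords @ dirs.T                # (n, m) projections per direction
    thresh = np.linspace(heights.min(), heights.max(), l)
    ect = np.zeros((m, l), dtype=int)
    for a in range(m):
        h = heights[:, a]
        for b, t in enumerate(thresh):
            v = int((h <= t).sum())          # vertices in the sublevel set
            e = sum(1 for i, j in edges      # edges fully inside it
                    if h[i] <= t and h[j] <= t)
            ect[a, b] = v - e                # Euler characteristic of a graph
    return ect.ravel()
```

In the paper's setup, this m·l-dimensional vector is concatenated with the node's own feature vector and fed to the downstream classifier.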