Piecewise Constant Spectral Graph Neural Network

Authors: Vahan Martirosyan, Jhony H. Giraldo, Fragkiskos D. Malliaros

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on nine benchmark datasets, including both homophilic and heterophilic graphs, demonstrate that PieCoN is particularly effective on heterophilic datasets, highlighting its potential for a wide range of applications. The implementation of PieCoN is available at https://github.com/vmart20/PieCoN.
Researcher Affiliation | Academia | Vahan Martirosyan (EMAIL), Université Paris-Saclay, CentraleSupélec, Inria, France; Jhony H. Giraldo (EMAIL), LTCI, Télécom Paris, Institut Polytechnique de Paris, France; Fragkiskos D. Malliaros (EMAIL), Université Paris-Saclay, CentraleSupélec, Inria, France
Pseudocode | Yes | Algorithm 1: Thresholding Algorithm for Identifying Significant Eigenvalue Gaps
Open Source Code | Yes | The implementation of PieCoN is available at https://github.com/vmart20/PieCoN.
Open Datasets | Yes | We evaluate PieCoN on seven diverse node classification datasets with varying graph structures and homophily ratios (Table 1). Cora, Citeseer, and Pubmed are citation networks where nodes are research papers and edges represent citations. Photo is a product co-occurrence graph with nodes as products and edges representing co-purchase relationships. Actor is a graph where nodes are actors and edges denote co-occurrence in films. Chameleon and Squirrel are graphs derived from Wikipedia pages; nodes represent web pages, and edges denote mutual links. Texas is an academic web graph where nodes are webpages from the University of Texas and edges represent hyperlinks between pages. Amazon-Ratings is a product co-purchasing network where nodes are products and edges indicate frequent co-purchases, with the task of predicting product rating classes.
Dataset Splits | Yes | All datasets were randomly split into 60% training, 20% validation, and 20% test sets for 10 different seeds.
Hardware Specification | Yes | All experiments were carried out on a Linux machine with an NVIDIA A100 GPU, Intel Xeon Gold 6230 CPU (20 cores @ 2.1GHz), and 24GB RAM.
Software Dependencies | No | The paper states that "The Adam optimizer was used for training" and that "Hyperparameter tuning was performed using the Hyperopt Tree of Parzen Estimators (TPE) algorithm (Bergstra et al., 2011)", but it does not provide version numbers for these software components.
Experiment Setup | Yes | Hyperparameter tuning was performed using the Hyperopt Tree of Parzen Estimators (TPE) algorithm (Bergstra et al., 2011) with the hyperparameter ranges shown in Table 4. The Adam optimizer was used for training with 2,000 epochs. Hyperparameters were selected to achieve the best performance on a validation set.
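The report cites only the title of Algorithm 1 ("Thresholding Algorithm for Identifying Significant Eigenvalue Gaps"), not its contents. As a hedged illustration of one plausible thresholding rule, a gap can be flagged as significant when it exceeds the mean consecutive-eigenvalue gap by some number of standard deviations; the function name, cutoff, and `num_std` parameter below are assumptions, not the paper's algorithm:

```python
import numpy as np

def significant_gap_indices(eigenvalues, num_std=1.0):
    """Illustrative sketch (not the paper's Algorithm 1): flag an index i as
    a significant gap when lam[i+1] - lam[i], over sorted eigenvalues,
    exceeds the mean gap by `num_std` standard deviations."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))
    gaps = np.diff(lam)                       # consecutive eigenvalue gaps
    threshold = gaps.mean() + num_std * gaps.std()
    return np.flatnonzero(gaps > threshold)   # indices of "significant" gaps

# Example: eigenvalues clustered near 0 and near 2 yield one large gap,
# between sorted positions 2 and 3.
lam = np.array([0.0, 0.05, 0.1, 1.9, 1.95, 2.0])
print(significant_gap_indices(lam))  # → [2]
```

Such gap indices could then delimit the spectral intervals on which a piecewise-constant filter is defined, which is consistent with the algorithm's title but remains a guess about its role.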
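The reported split protocol (random 60%/20%/20% train/validation/test over 10 seeds) can be sketched as follows; the function name and the use of NumPy's generator are choices of this sketch, not details from the paper:

```python
import numpy as np

def split_indices(num_nodes, seed, train_frac=0.6, val_frac=0.2):
    """Sketch of the reported protocol: randomly permute node indices and
    take 60% for training, 20% for validation, and the rest for testing."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

# One split per seed, as in "10 different seeds"; 2708 is Cora's node count.
splits = [split_indices(2708, seed) for seed in range(10)]
train, val, test = splits[0]
print(len(train), len(val), len(test))  # → 1624 541 543
```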
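The paper tunes hyperparameters with Hyperopt's TPE algorithm over the ranges in its Table 4 (not reproduced in this report). As a dependency-free stand-in, the sketch below uses plain random search over illustrative ranges; the ranges, parameter names, and search strategy are assumptions, not the paper's setup:

```python
import numpy as np

def tune(objective, n_trials=50, seed=0):
    """Random-search stand-in for the Hyperopt TPE tuning described in the
    report; the hyperparameter ranges are illustrative, not Table 4's."""
    rng = np.random.default_rng(seed)
    best_cfg, best_val = None, np.inf
    for _ in range(n_trials):
        cfg = {
            "lr": 10 ** rng.uniform(-4, -1),             # log-uniform learning rate
            "weight_decay": 10 ** rng.uniform(-6, -2),   # log-uniform weight decay
            "hidden_dim": int(rng.choice([32, 64, 128])),
        }
        val = objective(cfg)  # e.g. validation loss after training the model
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Toy objective whose optimum sits near lr = 1e-2, standing in for
# "best performance on a validation set".
best, loss = tune(lambda cfg: abs(np.log10(cfg["lr"]) + 2.0))
print(best["lr"], loss)
```

With Hyperopt installed, the same objective would instead be passed to `hyperopt.fmin` with `algo=tpe.suggest` and a search space built from `hyperopt.hp`.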