Heterophily-Aware Personalized PageRank for Node Classification

Authors: Giuseppe Pirrò

IJCAI 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Comprehensive experiments validate our method's state-of-the-art performance across challenging heterophilous benchmarks. ... Section 7: Experimental Evaluation. ... Table 3: Node classification results; accuracy (Wiki-cooc, Roman-empire, Amazon-ratings, Squirrel-F, Chameleon-F) and ROC-AUC scores (Minesweeper, Tolokers, Questions, arXiv-year). Bold and underlined indicate best and second-best results. The two ablation studies compare different feature transformations (with a fixed logistic regression classifier) and different classifiers (with the feature transformation fixed to SGC with k=3).
Researcher Affiliation Academia Giuseppe Pirrò, Department of Mathematics and Computer Science, University of Calabria, 87046 Rende (CS), Italy, EMAIL
Pseudocode Yes Algorithm 1 Heterophily-Aware Personalized PageRank. Input: 1: Graph G = (V, E) with node features X and pseudo-labels Y; 2: Parameters: damping α ∈ (0, 1), local/global restart β ∈ [0, 1], balance γ ∈ [0, 1]; 3: max_iter, convergence tolerance ϵ. Output: 4: H-PPR score dictionary {πu : u ∈ V}. 5: function H-PPR(G, X, Y, α, β, γ, max_iter, ϵ) ...
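The H-PPR update rule itself is elided above, so the following is only a minimal sketch of the personalized-PageRank power iteration that Algorithm 1 builds on: a restart-weighted iteration over a row-stochastic transition matrix, stopped at max_iter or tolerance ϵ. The function name `ppr_scores` and the treatment of α as the restart probability are assumptions; the paper's heterophily-aware mixing via β and γ is not reproduced here.

```python
import numpy as np

def ppr_scores(A, restart, alpha=0.15, max_iter=1000, eps=1e-6):
    """Generic personalized PageRank via power iteration (a sketch only;
    H-PPR additionally mixes local/global restarts via beta and
    feature/label similarity via gamma, which are not shown here)."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0                 # avoid division by zero on isolated nodes
    P = A / deg[:, None]                # row-stochastic transition matrix
    pi = np.full(n, 1.0 / n)            # uniform initialization
    for _ in range(max_iter):
        new = (1.0 - alpha) * (pi @ P) + alpha * restart
        if np.abs(new - pi).sum() < eps:  # L1 convergence check
            return new
        pi = new
    return pi
```

With a restart vector concentrated on one node, the scores stay a probability distribution and mass concentrates near that node, which is the behavior the π_u score dictionary in the algorithm exposes per node.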
Open Source Code Yes A more comprehensive discussion is available online1. 1https://github.com/giuseppepirro/happy
Open Datasets Yes We considered state-of-the-art heterophilous datasets [Platonov et al., 2023b] ... These enhanced datasets are larger and cover a broader range of domains, as summarized in Table 2. 2https://github.com/yandex-research/heterophilous-graphs
Dataset Splits Yes We used the dataset splits provided by [Platonov et al., 2023b] and available online2. The authors fix 10 random 50%/25%/25% train/validation/test splits.
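The paper reuses the 10 fixed splits shipped with the benchmark rather than regenerating them, but the split recipe (random 50%/25%/25%) can be sketched as follows; the function name `random_split` and the seeding scheme are illustrative assumptions, not the benchmark's actual code.

```python
import numpy as np

def random_split(n_nodes, seed, train_frac=0.5, val_frac=0.25):
    """One random 50/25/25 train/validation/test split over node indices.
    The benchmark fixes 10 such splits (here approximated by 10 seeds)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_nodes)
    n_tr = int(train_frac * n_nodes)
    n_va = int(val_frac * n_nodes)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
```

Averaging metrics over the 10 fixed splits, as the benchmark prescribes, is what makes results across methods directly comparable.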
Hardware Specification Yes We ran experiments on a Mac Studio M2 Ultra with a 24-core CPU, 60-core GPU, and 32-core Neural Engine with 192GB of unified memory.
Software Dependencies No We implemented it1 in PyTorch3 and MLX4 and integrated it into the evaluation pipeline provided by Platonov et al. [Platonov et al., 2023b]... Footnotes 3 and 4 link to pytorch.org and ml-explore/mlx respectively, but no specific version numbers for PyTorch or MLX are provided in the text.
Experiment Setup Yes We tuned random walk controls (α, β ∈ [0.1, 0.9]), computational settings (max_iter ∈ [100, 1000], ϵ ∈ [10^-6, 10^-8]), and SGC iterations K (2-4). A detailed ablation analysis is discussed below. ... Ablation Study 1: Feature Transformation Analysis (fixed two-layer feed-forward network used as classifier). SGC (k=2) ... SGC (k=4) ... GCN (l=2) ... GCN (l=3) ... GAT (l=2) ... GAT (l=3). Ablation Study 2: Classifier Analysis (fixed feature transformation via SGC with k=3). FFW3 ... Logistic ... SVM.
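Ablation Study 2 fixes the feature transformation to SGC with k=3. As a reminder of what that transformation does, here is a minimal NumPy sketch of SGC-style propagation: k rounds of multiplication by the symmetrically normalized adjacency (with self-loops), with no nonlinearity between rounds. The function name `sgc_features` is an assumption for illustration.

```python
import numpy as np

def sgc_features(A, X, k=3):
    """SGC feature transformation: X <- S^k X, where
    S = D^{-1/2} (A + I) D^{-1/2} is the symmetrically normalized
    adjacency with self-loops. No learned weights or nonlinearity."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    H = X
    for _ in range(k):                          # k propagation steps
        H = S @ H
    return H
```

Because the transformation is fixed and parameter-free, the downstream classifier (FFW, logistic regression, or SVM in the ablation) is the only trained component, which isolates its contribution.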