LocalFormer: Mitigating Over-Globalising in Transformers on Graphs with Localised Training

Authors: Naganand Yadati

TMLR 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate the effectiveness of LocalFormer compared to state-of-the-art baselines on vertex-classification tasks. |
| Researcher Affiliation | Academia | Naganand Yadati EMAIL Independent Researcher |
| Pseudocode | No | The paper describes the methods using mathematical equations (e.g., Equations 1, 3, and 4) and textual descriptions of the training schemes, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We choose a total of ten datasets for our evaluation. Five of them are the homophilic graphs Computer, Photo, CS, Physics (Shchur et al., 2018), and WikiCS (Mernyei and Cangea, 2020). The remaining five are the heterophilic graphs roman-empire, amazon-ratings, minesweeper, tolokers, and questions (Platonov et al., 2023a;b). |
| Dataset Splits | Yes | We utilise the public splits from prior work for these datasets. These splits are divided into training, validation, and test sets, maintaining a 50%:25%:25% ratio. Please see Appendix Section A.1 for more details on the datasets. For the Computer, Photo, CS, and Physics datasets, we follow the standard practice of randomly splitting the vertices into training (60%), validation (20%), and test (20%) sets (Chen et al., 2023; Shirzad et al., 2023). For the other datasets, we use the official splits provided in previous studies (Platonov et al., 2023a;b). |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware used for running the experiments (e.g., specific GPU or CPU models, or cloud resources). |
| Software Dependencies | No | The paper describes the models and experimental setup, but it does not list specific software dependencies with their version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | For all models, including our proposed model and the baseline models, we tune the hyperparameters using a grid search approach. The hyperparameter values that yield the best performance on the validation set are selected. ... Table 4: Hyperparameters of LocalFormer per dataset. |
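The 60%/20%/20% random vertex split quoted above can be sketched as follows. This is a minimal illustration of the splitting procedure described in the paper, not the authors' code; the function name, seed, and fractions are assumptions drawn only from the quoted text.

```python
import random


def split_vertices(num_vertices, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly partition vertex indices into train/validation/test sets.

    Illustrative sketch of the 60%/20%/20% random split the review quotes
    for the Computer, Photo, CS, and Physics datasets (hypothetical helper,
    not from the paper).
    """
    rng = random.Random(seed)
    idx = list(range(num_vertices))
    rng.shuffle(idx)  # shuffle so each split is a random sample of vertices
    n_train = int(train_frac * num_vertices)
    n_val = int(val_frac * num_vertices)
    train = idx[:n_train]
    val = idx[n_train : n_train + n_val]
    test = idx[n_train + n_val :]  # remainder becomes the test set
    return train, val, test
```

Fixing the seed makes the split reproducible across runs, which matters when comparing against baselines on the same partition.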
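The grid-search selection described in the Experiment Setup row (tune every hyperparameter combination, keep the one with the best validation score) can be sketched generically. The search space and `evaluate` callback below are illustrative assumptions, not the paper's actual grid from Table 4.

```python
from itertools import product


def grid_search(space, evaluate):
    """Exhaustively try every hyperparameter combination in `space` and
    return the configuration with the best validation score.

    `space` maps each hyperparameter name to a list of candidate values;
    `evaluate` maps a config dict to a validation metric (higher is better).
    Hypothetical sketch of the tuning scheme the review quotes.
    """
    keys = list(space)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)  # e.g. validation accuracy for this config
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Usage: `grid_search({"lr": [0.1, 0.01], "hidden": [64, 128]}, run_validation)` would train four models and return the config whose validation metric is highest.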