Primphormer: Efficient Graph Transformers with Primal Representations
Authors: Mingzhen He, Ruikai Yang, Hanling Tian, Youmei Qiu, Xiaolin Huang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the empirical performance of Primphormer on various graph benchmarks. |
| Researcher Affiliation | Academia | 1Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China. Correspondence to: Xiaolin Huang <EMAIL>. |
| Pseudocode | Yes | D. Pseudo-code. Algorithm 1: PyTorch-like Pseudo-Code for Primphormer Module. Algorithm 2: Algorithm for Primphormer in the GPS architecture. |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code or a link to a code repository. There is no mention of code being available in supplementary materials or upon request. |
| Open Datasets | Yes | In particular, we conducted experiments on the benchmark datasets including the image-based graph datasets CIFAR10, MNIST, COCO-SP, and PascalVOC-SP; the synthetic SBM datasets PATTERN and CLUSTER; the code graph dataset MalNet-Tiny; the molecular datasets including Peptides-Func, Peptides-Struct, and PCQM-Contact (Dwivedi et al., 2022a; Freitas et al., 2021; Dwivedi et al., 2022b; 2023); the large-scale ogbn-products dataset (Hu et al., 2020); and the graph isomorphism benchmark BREC (Wang & Zhang, 2024). |
| Dataset Splits | No | The paper states: 'We use the random partitioning method previously used by Wu et al. (2022; 2023a)' for the ogbn-products dataset. However, for other datasets like CIFAR10, MNIST, etc., the specific train/test/validation split percentages or methodologies are not explicitly provided within this paper. |
| Hardware Specification | No | The paper mentions 'Since ogbn-products is too large to be loaded into GPU', indicating GPU usage, but does not provide specific details such as GPU models (e.g., NVIDIA A100, RTX 2080 Ti), CPU models, or other detailed hardware specifications for the experiments. |
| Software Dependencies | No | The paper includes 'PyTorch-like Pseudo-Code', suggesting the use of PyTorch, but it does not specify the version number of PyTorch or any other software dependencies with their versions. |
| Experiment Setup | Yes | Our selection of hyperparameters was guided by the instructions in GPS (Rampasek et al., 2022) and Exphormer (Shirzad et al., 2023). Further details can be found in Tables A3 and A4. ... We utilized grid search to explore these hyperparameters across N_s ∈ {20, 30, 40, 50, 60} and η ∈ {0.1, 0.01}. For the remaining hyperparameters, we conducted a linear search for each parameter to determine the best values. Throughout all experiments, we employed CustomGatedGCN as the MPNN module alongside Primphormer, except for the ogbn-products dataset, where we use GCN. |
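The grid search quoted above can be sketched as a small loop over the Cartesian product of the two swept hyperparameters. This is a minimal illustration, not the paper's code: the `evaluate` callback stands in for training and validating one Primphormer configuration, and the parameter names `n_s` and `eta` merely mirror the N_s and η of the excerpt.

```python
from itertools import product

def grid_search(evaluate, n_s_values=(20, 30, 40, 50, 60), etas=(0.1, 0.01)):
    """Return the (n_s, eta) pair with the best validation score.

    `evaluate` is a hypothetical stand-in for one full train/validate run;
    the swept values match those quoted from the paper.
    """
    best_cfg, best_score = None, float("-inf")
    for n_s, eta in product(n_s_values, etas):
        score = evaluate(n_s=n_s, eta=eta)
        if score > best_score:
            best_cfg, best_score = (n_s, eta), score
    return best_cfg, best_score

# Toy objective (not from the paper): favors larger n_s and smaller eta.
def toy_evaluate(n_s, eta):
    return n_s - 10 * eta

cfg, score = grid_search(toy_evaluate)  # cfg is (60, 0.01) for this toy objective
```

For the remaining hyperparameters, the paper instead reports a linear (one-at-a-time) search, which would replace the product above with independent sweeps per parameter.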