What Are Good Positional Encodings for Directed Graphs?
Authors: Yinan Huang, Haoyu Wang, Pan Li
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our numerical experiments validate the expressiveness of the proposed PEs and demonstrate their effectiveness in solving sorting network satisfiability and performing well on general circuit benchmarks. Our code is available at https://github.com/Graph-COM/Multi-q-Maglap. In this section, we evaluate the effectiveness of multi-q Mag-PEs by studying the following questions: Q1: How good are the previous PEs and our proposed PEs at expressing directed distances/relations, e.g., directed shortest/longest path distances and the walk profile? Q2: How do these PE methods perform on practical tasks and real-world datasets? Q3: What is the impact of using PEs with or without basis-invariant/stable architectures? Table 1: Test RMSE results over 3 random seeds for node-pair distance prediction. Table 2: Test F1 scores over 5 random seeds for sorting network satisfiability. Table 3: Test results (RMSE for Gain/BW/PM, MSE for DSP/LUT) for Open Circuit Benchmark and High-level Synthesis. |
| Researcher Affiliation | Academia | Yinan Huang (Georgia Institute of Technology), Haoyu Wang (Georgia Institute of Technology), Pan Li (Georgia Institute of Technology) |
| Pseudocode | No | The paper describes methods and frameworks (e.g., in Section 4.4 and Figure 2) but does not include any clearly labeled pseudocode or algorithm blocks with structured steps. |
| Open Source Code | Yes | Our code is available at https://github.com/Graph-COM/Multi-q-Maglap. |
| Open Datasets | Yes | Open Circuit Benchmark (Dong et al., 2022a) contains 10,000 operational amplifier circuits as directed graphs, and the task is to predict the DC gain (Gain), bandwidth (BW) and phase margin (PM) of each circuit. The HLS dataset (Wu et al., 2022) collects 18,750 intermediate representation (IR) graphs of C/C++ code after front-end compilation (Alfred et al., 2007). |
| Dataset Splits | Yes | We sample regular directed graphs with average node degree drawn from {1, 1.5, 2}, or directed acyclic graphs with average node degree from {1, 1.5, 2, 2.5, 3}. In both cases, there are 400,000 samples for training and validation (graph size from 16 to 63, training:validation = 95:5), and 5,000 samples for test (graph size from 64 to 71). Sorting network... The dataset contains 800k training samples with a length (the number of variables to sort) from 7 to 11, 60k validation samples with a length of 12, and 60k test samples with a length from 13 to 16. Open Circuit Benchmark... We randomly split them into 0.9:0.05:0.05 as training, validation and test sets. HLS dataset... We randomly select 16,570 samples for training, and 1,000 each for validation and testing. |
| Hardware Specification | Yes | We use a Quadro RTX 6000 on a Linux system to train the models. |
| Software Dependencies | No | The paper does not explicitly list the software dependencies of the experiments with version numbers (e.g., Python, PyTorch, or CUDA versions). |
| Experiment Setup | Yes | Key hyperparameters are included in the main text while full details of the experiment setup and model configurations can be found in Appendix B. Table 4: Hyperparameters for walk profile/shortest path distance/longest path distance prediction on regular directed graphs. Table 5: Hyperparameters for walk profile/shortest path distance/longest path distance prediction on directed acyclic graphs. Table 6: Hyperparameters for sorting network prediction. Table 7: Hyperparameters for bidirected GIN on Open Circuit Benchmark. Table 8: Hyperparameters for undirected GIN on Open Circuit Benchmark. Table 9: Hyperparameters for SAT (each layer uses bidirected GIN as kernel) on Open Circuit Benchmark. Table 10: Hyperparameters for SAT (each layer uses undirected GIN as kernel) on Open Circuit Benchmark. Table 11: Hyperparameters for bidirected GIN on the High-level Synthesis dataset. Table 12: Hyperparameters for undirected GIN on the High-level Synthesis dataset. Table 13: Hyperparameters for SAT (2-hop bidi. GIN) on the High-level Synthesis dataset. Table 14: Hyperparameters for SAT (2-hop bidi. GIN) on the High-level Synthesis dataset. |
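For orientation on the technique the repository name (`Multi-q-Maglap`) refers to: the paper's positional encodings are built from eigenvectors of the magnetic Laplacian of a directed graph, computed at several potentials q. Below is a minimal single-q sketch in NumPy, using the standard magnetic Laplacian definition; the function name, normalization, and toy graph are my assumptions, not the authors' implementation (which additionally combines multiple q values with a basis-invariant/stable network).

```python
import numpy as np

def magnetic_laplacian_pe(A, q, k):
    """Sketch: k-dimensional eigenvector PE from the magnetic Laplacian
    of a directed graph with (binary) adjacency matrix A and potential q."""
    A_s = ((A + A.T) > 0).astype(float)       # symmetrized adjacency (support)
    theta = 2.0 * np.pi * q * (A - A.T)       # antisymmetric phase encodes direction
    H = A_s * np.exp(1j * theta)              # Hermitian "magnetic" adjacency
    D = np.diag(A_s.sum(axis=1))              # degrees of the symmetrized graph
    L = D - H                                 # magnetic Laplacian: Hermitian, PSD
    evals, evecs = np.linalg.eigh(L)          # real eigenvalues, complex eigenvectors
    return evals[:k], evecs[:, :k]            # k smallest eigenpairs as node PEs

# Toy directed path 0 -> 1 -> 2 (hypothetical example graph).
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
evals, evecs = magnetic_laplacian_pe(A, q=0.25, k=2)
```

At q = 0 this reduces to the ordinary (undirected) Laplacian eigenvector PE; a nonzero q makes the eigenvectors direction-aware, which is what lets such PEs express directed distances.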