EFormer: An Effective Edge-based Transformer for Vehicle Routing Problems
Authors: Dian Meng, Zhiguang Cao, Yaoxin Wu, Yaqing Hou, Hongwei Ge, Qiang Zhang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the Traveling Salesman Problem (TSP) and Capacitated Vehicle Routing Problem (CVRP) reveal that EFormer outperforms established baselines on synthetic datasets, including large-scale and diverse distributions. Moreover, EFormer demonstrates strong generalization on real-world instances from TSPLib and CVRPLib. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Dalian University of Technology (DUT); (2) School of Computing and Information Systems, Singapore Management University; (3) Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology; (4) Key Laboratory of Social Computing and Cognitive Intelligence (DUT), Ministry of Education, China |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations (e.g., Eq. 1-19) and provides architectural diagrams (Figure 1, Figure 2), but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Our code is publicly available": https://github.com/Regina921/EFormer |
| Open Datasets | Yes | Extensive experiments on the Traveling Salesman Problem (TSP) and Capacitated Vehicle Routing Problem (CVRP) reveal that EFormer outperforms established baselines on synthetic datasets, including large-scale and diverse distributions. Moreover, EFormer demonstrates strong generalization on real-world instances from TSPLib and CVRPLib. |
| Dataset Splits | Yes | Each training epoch samples 100,000 random instances, while a separate set of 10,000 uniformly generated instances is used for testing. |
| Hardware Specification | No | The paper mentions "Despite device limitations" but does not specify any particular CPU, GPU, or other hardware used for the experiments. |
| Software Dependencies | No | The paper mentions using "REINFORCE algorithm" and comparing against classical solvers like "Concorde [Cook et al., 2011], LKH3 [Helsgaun, 2017], HGS [Vidal, 2022], and OR-Tools [Perron and Furnon, 2023]", but it does not provide specific version numbers for any software dependencies used in its own implementation. |
| Experiment Setup | Yes | We train EFormer using reinforcement learning in an autoregressive manner. We sample a set of n trajectories {τ¹, …, τⁿ}. Each training epoch samples 100,000 random instances, while a separate set of 10,000 uniformly generated instances is used for testing. We adopt the POMO inference algorithm [Kwon et al., 2020] and report both the optimality gap and inference time. For EFormer specifically, we present results for greedy inference (x1) and instance augmentation (x8 and x128). |
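The setup row above describes REINFORCE training over n sampled trajectories per instance, with POMO-style inference. A common way to use the n trajectories in this setting is as a shared per-instance baseline: each trajectory's advantage is its reward minus the mean reward over all n rollouts of the same instance. The sketch below illustrates that advantage computation only; it is an assumption based on the cited POMO algorithm, not code from the EFormer repository, and the function name `pomo_advantages` is hypothetical.

```python
import numpy as np

def pomo_advantages(tour_lengths):
    """Shared-baseline advantages for POMO-style REINFORCE.

    tour_lengths: array of shape (batch, n) holding the cost of each
    of the n trajectories sampled per instance.
    Returns advantages of the same shape; the REINFORCE loss would then
    be the mean of -(advantage * trajectory_log_prob).
    """
    # Reward is the negative tour length (shorter tours are better).
    reward = -np.asarray(tour_lengths, dtype=float)
    # Baseline: mean reward over the n trajectories of each instance.
    baseline = reward.mean(axis=1, keepdims=True)
    return reward - baseline

# Toy usage: one instance with two rollouts of length 3 and 5.
adv = pomo_advantages([[3.0, 5.0]])
# The shorter tour gets a positive advantage, the longer a negative one,
# and per-instance advantages sum to zero by construction.
```

Because the baseline is the per-instance mean, no separate critic network is needed, which matches the variance-reduction idea behind POMO.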