Adversarial Generative Flow Network for Solving Vehicle Routing Problems
Authors: Ni Zhang, Jingfeng Yang, Zhiguang Cao, Xu Chi
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate that AGFN surpasses the popular construction-based neural solvers, showcasing strong generalization capabilities on synthetic and real-world benchmark instances. Our code is available at https://github.com/ZHANG-NI/AGFN. |
| Researcher Affiliation | Academia | 1Singapore Management University, Singapore 2Singapore Institute of Manufacturing Technology (SIMTech), Agency for Science, Technology and Research (A*STAR), Singapore |
| Pseudocode | No | The paper describes the methodology and GNN operations using equations and descriptive text, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/ZHANG-NI/AGFN. |
| Open Datasets | Yes | Dataset: Following previous works (Kim et al., 2024b; Kwon et al., 2020), we perform CVRP experiments using synthetic datasets for both training and testing. Each synthetic test dataset, corresponding to 200, 500, and 1,000 nodes, contains 128 instances. ... Additionally, we also evaluate the performance of AGFN trained on synthetic datasets with 100 nodes, as well as the POMO and GFACS models, on the CVRPLib (Uchoa et al., 2017) and TSPLib (Reinelt, 1991) benchmarks. |
| Dataset Splits | Yes | Each synthetic test dataset, corresponding to 200, 500, and 1,000 nodes, contains 128 instances. ... All neural models, including POMO, NeuOpt, GANCO, and ours, were trained on synthetic instances with 100 nodes (since those baselines are hard to train on more than 100 nodes), and tested on 200, 500, and 1,000 nodes. ... The 2,000 and 3,000 node scales contain 128 instances each, while the 5,000 and 10,000 node scales contain 64 instances each. |
| Hardware Specification | Yes | All experiments were conducted on a server equipped with an NVIDIA Tesla V100-32G GPU and an Intel Xeon Gold 6148 CPU. |
| Software Dependencies | No | The paper mentions using specific activation functions and model architectures (e.g., SiLU, GNN, MLP) and cites previous work, but does not provide specific version numbers for any software libraries, frameworks, or programming languages used (e.g., PyTorch, TensorFlow, Python). |
| Experiment Setup | Yes | Hyperparameters: The number of directed edges, k, originating from a single node in the sparse edge set, |E|, is set to |V|/4. For training the model, we only employ sampling decoding (without greedy selection) for route generation, with the number of sampled routes per instance, N, set to 20. The ratio of training rounds between the generator and the discriminator is maintained at 4 : 1. During testing, hybrid decoding is used for route generation, with N set to 100 and P in Eq. (8) set to 0.05 (see Appendix B for more details on the selection of P). ... In our experiments, we set a = 1 and b = 9, with a fixed vehicle capacity of C = 50 for all problem sizes: 200, 500, and 1,000 customers. |
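The quoted setup (demands drawn uniformly from [a, b] = [1, 9], fixed capacity C = 50, test sets of 128 instances at 200/500/1,000 nodes) can be sketched as a small generator. This is a minimal illustration, not the authors' code: the uniform unit-square coordinates, the zero-demand depot, and the function name `make_cvrp_instance` are assumptions beyond the quoted text.

```python
import random

def make_cvrp_instance(n_customers, a=1, b=9, capacity=50, seed=None):
    """Sketch of one synthetic CVRP instance per the reported setup.

    Assumptions (not stated verbatim in the quoted excerpts): customer
    coordinates are sampled uniformly in the unit square, and node 0 is
    a depot with zero demand.
    """
    rng = random.Random(seed)
    coords = [(rng.random(), rng.random()) for _ in range(n_customers + 1)]
    demands = [0] + [rng.randint(a, b) for _ in range(n_customers)]
    return {"coords": coords, "demands": demands, "capacity": capacity}

# One test set at the 200-node scale would hold 128 such instances.
test_set_200 = [make_cvrp_instance(200, seed=i) for i in range(128)]
```

A full reproduction would also need the sparse edge set (k = |V|/4 outgoing edges per node) and the sampling/hybrid decoding settings (N = 20 for training, N = 100 and P = 0.05 for testing) described in the row above.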