COExpander: Adaptive Solution Expansion for Combinatorial Optimization

Authors: Jiale Ma, Wenzheng Pan, Yang Li, Junchi Yan

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental To ensure transparent evaluation, we further take the lead to canonicalize 29 benchmarks spanning 6 popular COPs (MIS, MCl, MVC, MCut, TSP, ATSP) and various scales (50-10K nodes), upon which experiments demonstrate concrete SOTA performance of COExpander over these tasks.
Researcher Affiliation Academia 1) School of Artificial Intelligence & School of Computer Science, Shanghai Jiao Tong University; 2) Shanghai Innovation Institute. Correspondence to: Junchi Yan <EMAIL>.
Pseudocode Yes Algorithm 1 Multi-step solution expanding. Algorithm 2 RLSA as Post-Processor.
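The multi-step solution expanding of Algorithm 1 can be illustrated with a minimal sketch, under stated assumptions: a predictor scores the undecided binary variables conditioned on the current partial solution, the most confident decisions are committed each step, and the loop repeats until every variable is fixed. The `score_fn` and `vars_per_step` names here are hypothetical stand-ins for the paper's trained model and its determinate-degree hyperparameter, not its actual API.

```python
import random

def expand_solution(num_vars, score_fn, vars_per_step):
    """Multi-step solution expansion sketch: each step, score all
    undecided binary variables given the current partial solution,
    then commit the `vars_per_step` most confident decisions."""
    solution = {}                       # var index -> 0/1
    undecided = set(range(num_vars))
    while undecided:
        # score_fn returns {var: prob of taking value 1}, conditioned
        # on the partial solution built so far
        scores = score_fn(solution, undecided)
        # most confident = probability farthest from 0.5
        ranked = sorted(undecided,
                        key=lambda v: abs(scores[v] - 0.5),
                        reverse=True)
        for v in ranked[:vars_per_step]:
            solution[v] = 1 if scores[v] >= 0.5 else 0
            undecided.discard(v)
    return solution

# toy usage with a random stand-in predictor
random.seed(0)
def toy_scores(partial, undecided):
    return {v: random.random() for v in undecided}

sol = expand_solution(10, toy_scores, vars_per_step=3)
```

Setting `vars_per_step` to `num_vars` recovers a one-shot decoder, while `vars_per_step = 1` gives a fully sequential construction; the adaptive middle ground is the point of the method.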
Open Source Code Yes Code available at a GitHub repository.
Open Datasets Yes For node-selection problems, we have generated datasets of three types: RB (Xu et al., 2005), ER (Erdős & Rényi, 1960), and BA (Barabási & Albert, 1999). Following Meta-EGN (Wang & Li, 2023), we conduct experiments for MCl and MVC on two real datasets, Twitter (Jure, 2014) and COLLAB (Yanardag & Vishwanathan, 2015). For MIS and TSP, we follow DIFUSCO (Sun & Yang, 2023) to use SATLIB and TSPLIB. Detailed information regarding the datasets is provided in Appendix E.
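The RB construction is more involved, but the ER and BA graph families cited in the row are easy to sketch in pure Python (NetworkX's `erdos_renyi_graph` and `barabasi_albert_graph` offer production implementations). The parameters below are illustrative, not the paper's actual generation settings.

```python
import random

def erdos_renyi(n, p, rng):
    """ER random graph G(n, p): include each possible edge independently
    with probability p."""
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

def barabasi_albert(n, m, rng):
    """BA preferential attachment: start from m seed nodes; each new node
    attaches to m distinct existing nodes, chosen proportionally to their
    current degree (via a degree-weighted pool)."""
    edges = []
    pool = list(range(m))               # degree-weighted node pool
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(pool))
        for t in targets:
            edges.append((new, t))
            pool.extend([new, t])       # both endpoints gain degree
    return edges

rng = random.Random(0)
er_edges = erdos_renyi(50, 0.1, rng)
ba_edges = barabasi_albert(50, 3, rng)
```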
Dataset Splits Yes Table 12. List of our re-collated datasets benchmarking six COPs for training and testing COExpander and any related neural solvers. The MODEL column denotes the dataset on which the model is trained. ... For MCl and MVC, we train COExpander on RB-SMALL and test it on Twitter and COLLAB. ... For MIS and TSP, we follow DIFUSCO (Sun & Yang, 2023) to use SATLIB and TSPLIB. For the Twitter graph dataset (Jure, 2014) and the COLLAB dataset (Yanardag & Vishwanathan, 2015) ... we select 20% of the data as the generalization test sets.
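The 20% hold-out quoted in the row amounts to a seeded shuffle-and-slice; this sketch shows one such split, with the function name, seed, and fraction as illustrative assumptions rather than the paper's protocol.

```python
import random

def split_generalization_set(instances, test_frac=0.2, seed=0):
    """Hold out `test_frac` of the instances as a generalization test set;
    return (remaining, test) as two disjoint lists."""
    idx = list(range(len(instances)))
    random.Random(seed).shuffle(idx)    # deterministic given the seed
    n_test = int(len(instances) * test_frac)
    test = [instances[i] for i in idx[:n_test]]
    rest = [instances[i] for i in idx[n_test:]]
    return rest, test

rest, test = split_generalization_set(list(range(100)))
```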
Hardware Specification Yes Hardware. All models are trained and tested using NVIDIA H800 (80G) GPUs and an Intel(R) Xeon(R) Platinum 8558 96-Core Processor CPU. For all test evaluations, a single GPU is utilized, and the batch size is set to 1 to ensure a fair comparison of the solving time across different models.
Software Dependencies Yes Note that we use version 11.0.34 of Gurobi... The version of LKH we used is 3.0.7... We use version 0.3.0 of KaMIS... We use version 03.12.19 of Concorde... We use version 1.0 of GA-EAX...
Experiment Setup Yes Training Settings. During the training phase, we set the number of convolution layers to 12, the learning rate to 0.0002, and the number of training epochs to 50. During the fine-tuning phase, we set the learning rate to 0.00005 and the number of training epochs to 10. Detailed parameters are provided in Appendix F. We have organized the training settings and model parameters of COExpander in Table 13. In the table, we adopt the name of the targeted training data to denote each individually trained model (21 in total) throughout this paper.
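The training and fine-tuning hyperparameters quoted above can be captured in a small config object; the field and class names here are illustrative, only the numeric values come from the row.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Training-phase hyperparameters quoted in the report."""
    num_conv_layers: int = 12
    lr: float = 2e-4       # learning rate 0.0002
    epochs: int = 50

@dataclass
class FinetuneConfig:
    """Fine-tuning-phase hyperparameters quoted in the report."""
    lr: float = 5e-5       # learning rate 0.00005
    epochs: int = 10

cfg = TrainConfig()
ft = FinetuneConfig()
```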