BOPO: Neural Combinatorial Optimization via Best-anchored and Objective-guided Preference Optimization

Authors: Zijun Liao, Jinbiao Chen, Debing Wang, Zizhen Zhang, Jiahai Wang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on the Job-shop Scheduling Problem (JSP), Traveling Salesman Problem (TSP), and Flexible Job-shop Scheduling Problem (FJSP) show BOPO outperforms state-of-the-art neural methods, reducing optimality gaps with efficient inference.
Researcher Affiliation | Academia | School of Computer Science and Engineering, Sun Yat-sen University, China. Correspondence to: Zizhen Zhang <EMAIL>.
Pseudocode | Yes | Algorithm 1 BOPO Training
Open Source Code | Yes | Our implementation of BOPO using PyTorch and trained models for each problem are available: https://github.com/L-Z-7/BOPO
Open Datasets | Yes | For evaluation, we use three standard JSP benchmarks: Taillard's (TA) (Taillard, 1993), Lawrence's (LA) (Lawrence, 1984), and Demirkol's (DMU) (Demirkol et al., 1998).
Dataset Splits | Yes | We generate a training dataset of 30000 instances following SLIM (Corsini et al., 2024), consisting of 6 shapes (n × m) in {10×10, 15×10, 15×15, 20×10, 20×15, 20×20} with 5000 instances per shape. During training, we generate an additional 100 instances per shape from the same shape set for validation.
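The split described above (6 shapes, 5000 training and 100 validation instances per shape) can be sketched in plain Python. This is a hypothetical illustration, not the BOPO codebase: the function names, instance fields, and the processing-time range are our own assumptions.

```python
import random

# 6 JSP shapes (n jobs x m machines) from the quoted setup.
SHAPES = [(10, 10), (15, 10), (15, 15), (20, 10), (20, 15), (20, 20)]

def make_instance(n, m, rng):
    """One random JSP instance: each job visits the m machines in a
    random order, with integer processing times (range is an assumption)."""
    return {
        "routes": [rng.sample(range(m), m) for _ in range(n)],
        "times": [[rng.randint(1, 99) for _ in range(m)] for _ in range(n)],
    }

def make_dataset(per_shape, seed=0):
    """Generate per_shape instances for every shape in SHAPES."""
    rng = random.Random(seed)
    return [make_instance(n, m, rng) for n, m in SHAPES for _ in range(per_shape)]

train = make_dataset(per_shape=5000)         # 6 * 5000 = 30000 instances
valid = make_dataset(per_shape=100, seed=1)  # 6 * 100  = 600 instances
```

Keeping validation shapes identical to training shapes, as the quote specifies, means validation measures in-distribution fit rather than size generalization.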
Hardware Specification | Yes | Experiments were conducted on a Linux system with an NVIDIA TITAN Xp GPU and an Intel(R) Xeon(R) E5-2680 CPU.
Software Dependencies | No | Our implementation of BOPO using PyTorch and trained models for each problem are available. (Only mentions PyTorch, but no version number, and no other specific software/library versions.)
Experiment Setup | Yes | We employ the Adam optimizer (Kingma & Ba, 2014) with learning rate η = 0.0002 and train the neural model for 20 epochs. We set the number of hybrid-rollout solutions B = 256, the number of filtered solutions K = 16, and batch size D = 1.
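The sample-then-filter step implied by these hyperparameters (draw B = 256 rollouts per instance, keep the K = 16 best by objective) can be sketched as below. This is a minimal illustration under our own assumptions: `solve_once` is a stand-in for one stochastic rollout, and only B, K, η, the epoch count, and D come from the paper.

```python
import random

B, K = 256, 16          # rollout size and number of filtered solutions
ETA = 2e-4              # Adam learning rate from the paper
EPOCHS, BATCH = 20, 1   # training epochs and batch size D

def solve_once(rng):
    """Stand-in for one stochastic rollout; returns an objective value
    (e.g. makespan for JSP, where lower is better)."""
    return rng.uniform(100.0, 200.0)

def rollout_and_filter(rng):
    """Sample B candidate solutions, keep the K best (lowest objective).
    The retained set would then feed the preference-optimization update."""
    solutions = [solve_once(rng) for _ in range(B)]
    return sorted(solutions)[:K]

best_k = rollout_and_filter(random.Random(0))
```

Filtering to the top K of B samples concentrates training on high-quality solutions while the full rollout still provides the contrast needed for preference-based updates.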