BTBS-LNS: Binarized-Tightening, Branch and Search on Learning LNS Policies for MIP

Authors: Hao Yuan, Wenli Ouyang, Changwen Zhang, Yong Sun, Liming Gong, Junchi Yan

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show its superior performance over the open-source solver SCIP and LNS baselines. Moreover, it performs competitively with, and sometimes better than, the commercial solver Gurobi (v9.5.0), especially on the MIPLIB2017 benchmark chosen by Hans Mittelmann, where our method can deliver 10% better primal gaps compared with Gurobi within a 300s cut-off time."
Researcher Affiliation | Collaboration | Hao Yuan¹, Wenli Ouyang¹, Changwen Zhang¹, Yong Sun¹, Liming Gong¹, Junchi Yan². ¹AI Lab, Lenovo Research; ²School of Artificial Intelligence & Department of Computer Science and Engineering & MoE Lab of AI, Shanghai Jiao Tong University.
Pseudocode | Yes | Algorithm 1: Bound tightening for integer variable xi; Algorithm 2: Offline training of branching policy for LNS; Algorithm 3: Branch and search at the t-th step.
Open Source Code | No | The paper does not provide an explicit statement from the authors about releasing their code, nor does it include a direct link to a code repository for the methodology described.
Open Datasets | Yes | "Experiments on seven MIP problems show that our method consistently outperforms the LNS baselines and open-source SCIP (Gamrath et al., 2020). On the MIPLIB2017 benchmark chosen by Hans Mittelmann (https://plato.asu.edu/bench.html), it even achieves superior performance over Gurobi, purely taking SCIP as the baseline solver. We also test on two MIP datasets from the Machine Learning for Combinatorial Optimization (ML4CO) competition (https://www.ecole.ai/2021/ml4co-competition/): Balanced Item Placement (Item) and Anonymous MIPLIB (AMIPLIB), on their official testing instances."
Dataset Splits | Yes | "We generate 200, 20, and 100 instances as training, validation, and testing sets, respectively. For the AMIPLIB problem, which contains a curated set of instances from MIPLIB, we split the instances into train, validation, and test sets by 70%, 15%, and 15%, performing cross-validation for a comprehensive comparison across all instances; the 70%/15%/15% split is redrawn at each round."
Hardware Specification | Yes | "We run experiments on an Intel 2.50GHz CPU." The GPU version of BTBS-LNS is additionally evaluated on an NVIDIA GeForce RTX 2080 for the balanced item placement problem.
Software Dependencies | Yes | SCIP (v7.0.3) and Gurobi (v9.5.0), state-of-the-art open-source and commercial solvers, respectively.
Experiment Setup | Yes | "We train 20 epochs for each instance, with 50 iterations per epoch and a 2s re-optimization time limit per iteration. The graph convolutional layers were set as K = 2 for both policies, with 64-dimensional latent representations for the nodes and edges. Specifically for branching, we set the max branching variables k = 50 in Eq. 5 for the local branching variant. In the inference phase, the branching variable ratio r in Alg. 3 is empirically set to 10% for both branching variants."
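The "10% better primal gaps" claim in the Research Type row is easier to interpret with the primal gap metric as it is conventionally defined in MIP benchmarking. The sketch below uses that standard definition as an assumption; the paper may use a variant.

```python
def primal_gap(incumbent_obj, best_known_obj):
    """Standard primal-gap metric from MIP benchmarking (assumed
    definition, not taken from the paper): 0 when the incumbent matches
    the best known objective, 1 when there is no incumbent or the two
    objectives have opposite signs, and a scaled relative distance
    otherwise."""
    if incumbent_obj is None:           # no feasible solution found
        return 1.0
    if incumbent_obj == best_known_obj == 0.0:
        return 0.0
    if incumbent_obj * best_known_obj < 0:  # opposite signs
        return 1.0
    return abs(incumbent_obj - best_known_obj) / max(
        abs(incumbent_obj), abs(best_known_obj)
    )
```

Under this definition, "10% better primal gaps" would mean the average gap over the benchmark set is 10% lower than Gurobi's within the same 300s cut-off.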
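The AMIPLIB splitting protocol from the Dataset Splits row (70%/15%/15% with a fresh split per cross-validation round) can be sketched as follows. The helper name and seeding scheme are hypothetical; the authors' actual splitting code is not released.

```python
import random

def split_instances(instances, round_seed=0, fracs=(0.70, 0.15, 0.15)):
    """Shuffle instances and split them 70/15/15 into train/validation/
    test, as described for the AMIPLIB problem. Passing a different
    round_seed redraws the split, mimicking one cross-validation round.
    (Hypothetical helper, not the authors' code.)"""
    rng = random.Random(round_seed)
    order = list(range(len(instances)))
    rng.shuffle(order)
    n_train = int(fracs[0] * len(instances))
    n_val = int(fracs[1] * len(instances))
    train = [instances[i] for i in order[:n_train]]
    val = [instances[i] for i in order[n_train:n_train + n_val]]
    test = [instances[i] for i in order[n_train + n_val:]]
    return train, val, test
```

For the synthetic problems, the paper instead fixes the counts directly (200 training, 20 validation, 100 testing instances), so no fractional split is needed there.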
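For reproduction attempts, the hyperparameters scattered through the Experiment Setup row can be collected in one place. Only the values below come from the paper; the key names are hypothetical.

```python
# Hyperparameters reported in the paper's experiment setup,
# gathered as a single config dict (key names are hypothetical).
CONFIG = {
    "train_epochs_per_instance": 20,
    "iterations_per_epoch": 50,
    "reopt_time_limit_s": 2,        # per-iteration re-optimization limit
    "gcn_layers_K": 2,              # both policies
    "latent_dim": 64,               # node and edge representations
    "max_branching_vars_k": 50,     # Eq. 5, local branching variant
    "branching_var_ratio_r": 0.10,  # Alg. 3, inference phase
}
```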