Don’t Restart, Just Reuse: Reoptimizing MILPs with Dynamic Parameters

Authors: Sijia Zhang, Shuli Zeng, Shaoang Li, Feng Wu, Shaojie Tang, Xiangyang Li

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental
Response: "Extensive experiments across nine reoptimization datasets show that our VP-OR outperforms the state-of-the-art methods, achieving higher-quality solutions under strict time limits." The experiments consist of three main parts: (1) evaluate the different methods on nine public reoptimization datasets, focusing on whether they can quickly find feasible solutions within the 10-second time limit; (2) assess the quality of the feasible solutions obtained within the 10-second limit; (3) to compare solution convergence speeds more intuitively, plot the relative primal gap over time under a larger time limit of 100 seconds.
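Experiment 3 tracks the relative primal gap over time, but this excerpt does not define it. A minimal sketch under a common convention (an assumption here, not necessarily the paper's exact formula): normalize the distance between the incumbent objective and the best known objective by the larger magnitude of the two.

```python
def relative_primal_gap(incumbent: float, best_known: float, eps: float = 1e-10) -> float:
    """Relative primal gap under a common (assumed) convention:
    |incumbent - best| / max(|incumbent|, |best|, eps)."""
    return abs(incumbent - best_known) / max(abs(incumbent), abs(best_known), eps)

# Hypothetical incumbent trajectory: (elapsed seconds, objective value) pairs,
# as would be logged when plotting gap-vs-time curves.
trajectory = [(1.0, 130.0), (5.0, 110.0), (10.0, 101.0)]
best_known = 100.0
gap_curve = [(t, relative_primal_gap(obj, best_known)) for t, obj in trajectory]
```

The `eps` guard keeps the gap defined when both objectives are zero; the trajectory values above are illustrative only.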
Researcher Affiliation: Academia
Response: "¹School of Computer Science and Technology, University of Science and Technology of China. ²Department of Management Science and Systems, State University of New York at Buffalo. Correspondence to: Xiang-Yang Li <EMAIL>, Feng Wu <EMAIL>."
Pseudocode: Yes
Response: "Algorithm 1: Overall Thompson sampling framework. Algorithm 2: The parameter update process algorithm. Algorithm 3: Relaxation mechanism."
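Algorithm 1 is an "overall Thompson sampling framework." The paper's algorithm is not reproduced in this excerpt, but a generic Beta-Bernoulli Thompson sampling loop conveys the idea: each arm (e.g. a candidate parameter configuration) keeps a Beta posterior over its success probability, and each round the arm with the highest posterior draw is pulled. The reward semantics below ("arm succeeded this round") are a hypothetical stand-in, not the paper's reward definition.

```python
import random

def thompson_sampling(n_arms: int, reward_fn, rounds: int = 1000, seed: int = 0):
    """Generic Beta-Bernoulli Thompson sampling (a sketch, not the paper's
    Algorithm 1). wins/losses parameterize each arm's Beta posterior."""
    rng = random.Random(seed)
    wins = [0] * n_arms
    losses = [0] * n_arms
    for _ in range(rounds):
        # Sample one success probability per arm from Beta(wins+1, losses+1).
        draws = [rng.betavariate(wins[a] + 1, losses[a] + 1) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: draws[a])
        # Bernoulli reward, e.g. "this configuration found a feasible solution".
        if reward_fn(arm):
            wins[arm] += 1
        else:
            losses[arm] += 1
    return wins, losses

# Toy environment: arm 2 succeeds most often, so it should be pulled the most.
probs = [0.2, 0.5, 0.8]
env = random.Random(1)
wins, losses = thompson_sampling(3, lambda a: env.random() < probs[a])
```

As exploration resolves, pulls concentrate on the arm with the highest empirical success rate while still occasionally sampling the others.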
Open Source Code: No
Response: The paper does not explicitly state that its source code is available, nor does it provide a direct link to a repository for the described methodology.
Open Datasets: Yes
Response: "We select 9 series of instances from the MIP Computational Competition 2023 (Bolusani et al., 2023) to evaluate our approach... Most of these series are based on instances from the MIPLIB 2017 benchmark library (Gleixner et al., 2021)... The publicly available dataset from the MIP Workshop 2023 Computational Competition on Reoptimization (Bolusani et al., 2023) is limited in size..."
Dataset Splits: Yes
Response: "Each dataset contains 50 instances. To facilitate the experiments, we pair the instances in groups of two, resulting in 25 groups: 20 groups in the training set and 5 groups in the test set. The first instance in each group serves as the historical instance, for which the intermediate solving information required for feature extraction is pre-recorded... To further increase the number of test samples, we generate similar datasets using bnd 1 as a testing example, employing a method consistent with that published by the competition organizers. The results presented in Appendix E.8 are consistent with the tests shown in Table 4 of the main text."
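The described split (50 instances per dataset, paired into 25 historical/new groups, of which 20 train and 5 test) can be sketched as below. The file names and the consecutive-pairing rule are illustrative assumptions; the source only states that instances are paired and that the first of each pair is the historical instance.

```python
def make_splits(instances, n_train_groups=20):
    """Pair consecutive instances into (historical, new) groups and split
    them into train/test. Assumes `instances` is an ordered list with an
    even length; the exact pairing rule is an assumption."""
    assert len(instances) % 2 == 0, "instances must pair up evenly"
    groups = [(instances[i], instances[i + 1]) for i in range(0, len(instances), 2)]
    return groups[:n_train_groups], groups[n_train_groups:]

# Hypothetical instance identifiers for one 50-instance dataset.
instances = [f"instance_{i:02d}.mps" for i in range(50)]
train_groups, test_groups = make_splits(instances)
```

Each group's first element plays the role of the historical instance whose solving information is pre-recorded for feature extraction.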
Hardware Specification: Yes
Response: "The training process is conducted on a single machine that contains eight GPU devices (NVIDIA GeForce RTX 4090) and two AMD EPYC 7763 CPUs."
Software Dependencies: Yes
Response: "The model was implemented in PyTorch (Paszke et al., 2019) and optimized using Adam (Kingma & Ba, 2014) with a training batch size of 16... Throughout all experiments, we use SCIP 8.0.4 (Bestuzheva et al., 2021) as the backend solver..."
Experiment Setup: Yes
Response: "In our experiments, we include only one parameter: the percentage of fixed variables P. In this section, we present the results for P = 0.7. Results for other values of P are provided in Appendix E.3... We apply a time limit of 10 seconds for each method... We choose G = 10 in our evaluation."
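The setup's single parameter P, the percentage of variables fixed to their values from a reference solution, can be illustrated with a solver-agnostic sketch. Which variables get fixed (here simply the first ceil(P·n) in insertion order) is a simplifying assumption, not the paper's selection rule.

```python
import math

def fix_variables(reference_solution: dict, p: float) -> dict:
    """Fix a fraction p of variables to their values in a reference solution.
    The selection rule (first ceil(p * n) variables in insertion order) is an
    illustrative assumption; the remaining variables stay free for the solver."""
    names = list(reference_solution)
    n_fixed = math.ceil(p * len(names))
    return {name: reference_solution[name] for name in names[:n_fixed]}

# With P = 0.7 and 10 variables, 7 are fixed and 3 remain free.
ref = {f"x{i}": float(i % 2) for i in range(10)}
fixed = fix_variables(ref, 0.7)
```

The fixed assignments would then be handed to the backend solver (SCIP in the paper) as variable bounds before the 10-second solve begins.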