Differentiable Integer Linear Programming

Authors: Zijie Geng, Jie Wang, Xijun Li, Fangzhou Zhu, Jianye Hao, Bin Li, Feng Wu

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section presents empirical results to demonstrate the effectiveness of our proposed DiffILO in (1) generating feasible and high-quality heuristic solutions in an end-to-end manner, and (2) improving the overall performance of traditional solvers to find high-quality solutions within a constrained time frame. We then conduct a case study to provide some additional insights into the optimization process of DiffILO. All training and evaluations are performed using the same hardware configuration, specifically an Intel(R) Xeon(R) Gold 6246R CPU @ 3.40GHz and an NVIDIA GeForce RTX 3090 GPU. Code is available at https://github.com/MIRALab-USTC/L2O-DiffILO. More experimental details can be found in Appendix C.
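The quoted section describes DiffILO's goal of producing feasible, high-quality solutions end-to-end. A minimal toy sketch of the kind of penalized, sampling-based objective this suggests, where candidate binary solutions are drawn from Bernoulli probabilities and constraint violations are penalized with a coefficient µ; all names (`penalized_loss`, `probs`, `mu`, `num_samples`) and the exact penalty form are our assumptions, not the paper's formulation:

```python
import random

def penalized_loss(c, A, b, probs, mu, num_samples=15, seed=0):
    """Monte-Carlo estimate of E[c^T x + mu * sum_i max(0, (A x - b)_i)]
    for x ~ Bernoulli(probs), for an ILP min c^T x s.t. A x <= b, x binary."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        # Sample one binary solution from the per-variable probabilities.
        x = [1 if rng.random() < p else 0 for p in probs]
        obj = sum(ci * xi for ci, xi in zip(c, x))
        # Hinge penalty on each violated constraint row.
        viol = sum(max(0.0, sum(aij * xi for aij, xi in zip(row, x)) - bi)
                   for row, bi in zip(A, b))
        total += obj + mu * viol
    return total / num_samples

# Tiny ILP: minimize x0 + x1 subject to x0 + x1 >= 1,
# written in "A x <= b" form as -x0 - x1 <= -1.
c = [1.0, 1.0]
A = [[-1.0, -1.0]]
b = [-1.0]
loss_infeasible = penalized_loss(c, A, b, probs=[0.0, 0.0], mu=5.0)  # always violates
loss_feasible = penalized_loss(c, A, b, probs=[1.0, 0.0], mu=5.0)    # always feasible
```

With degenerate probabilities (0 or 1) the estimate is exact: the infeasible point pays only the penalty (5.0), while the feasible point pays only its objective value (1.0), illustrating why lowering this loss pushes sampled solutions toward feasibility.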
Researcher Affiliation | Collaboration | Zijie Geng 1,2, Jie Wang 1,2, Xijun Li 3, Fangzhou Zhu 4, Jianye Hao 4,5, Bin Li 1, Feng Wu 1,2. 1 University of Science and Technology of China; 2 MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition; 3 Shanghai Jiao Tong University; 4 Noah's Ark Lab, Huawei; 5 Tianjin University. EMAIL, EMAIL EMAIL, EMAIL
Pseudocode | No | The paper describes the steps in regular paragraph text and mathematical formulations (e.g., Section 3, Figure 2) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/MIRALab-USTC/L2O-DiffILO.
Open Datasets | Yes | We evaluate our method mainly on three widely used ILP problem benchmarks: set covering (SC) (Balas & Ho, 1980), the maximum independent set problem (IS) (Bergman et al., 2015), and combinatorial auctions (CA) (Leyton-Brown et al., 2000). The datasets are generated using the code from Gasse et al. (2019).
Dataset Splits | Yes | Following the settings described in the PS paper (Han et al., 2023), we generate 400 instances for each benchmark: 240 for training, 60 for validation, and 100 for testing.
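The 240/60/100 split quoted above can be sketched as follows; the file-name pattern and the fixed seed are placeholders for illustration, not details taken from the paper or its released code:

```python
import random

# 400 generated instances per benchmark (placeholder file names).
instances = [f"instance_{i:03d}.lp" for i in range(400)]

# Fixed seed so the shuffle, and hence the split, is reproducible.
rng = random.Random(42)
rng.shuffle(instances)

# 240 train / 60 validation / 100 test, as described in the report.
train, valid, test = instances[:240], instances[240:300], instances[300:]
```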
Hardware Specification | Yes | All training and evaluations are performed using the same hardware configuration, specifically an Intel(R) Xeon(R) Gold 6246R CPU @ 3.40GHz and an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | Yes | During inference, SCIP 8.1.0 (Achterberg, 2009) and Gurobi 10.0.1 (Gurobi Optimization, 2021) are used for solving instances.
Experiment Setup | Yes | Table 4: Hyperparameters in our experiments.
Hyperparameter | SC | IS | CA | Description
embed size | 32 | 32 | 32 | The embedding size of the GNN predictor.
depth | 3 | 10 | 10 | The depth of the GNN predictor.
batch size | 5 | 5 | 5 | Number of ILP problems in each training batch.
num samples | 15 | 15 | 15 | Number of sampled solutions for reparameterization.
num epochs | 1,200 | 1,200 | 2,400 | Number of max running epochs.
optimizer | Adam | Adam | Adam | Optimizer for training.
learning rate | 1e-4 | 8e-5 | 8e-5 | Learning rate for training.
lr T | 200 | 200 | 200 | The period for learning rate cosine annealing.
mu init | 5.0 | 100.0 | 15.0 | The initial value of µ.
mu step | 0.01 | 1.0 | 0.001 | Step size for optimizing µ.
cons targ | 1.0 | 0.1 | 0.1 | Target value of average constraint violation.
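For convenience, the Table 4 values can be collected into one configuration mapping per benchmark; the key names below are our own shorthand, not identifiers from the paper's released code:

```python
# Hyperparameters from Table 4, keyed by benchmark (SC, IS, CA).
HYPERPARAMS = {
    "SC": {"embed_size": 32, "depth": 3,  "batch_size": 5, "num_samples": 15,
           "num_epochs": 1200, "optimizer": "Adam", "learning_rate": 1e-4,
           "lr_T": 200, "mu_init": 5.0,   "mu_step": 0.01,  "cons_targ": 1.0},
    "IS": {"embed_size": 32, "depth": 10, "batch_size": 5, "num_samples": 15,
           "num_epochs": 1200, "optimizer": "Adam", "learning_rate": 8e-5,
           "lr_T": 200, "mu_init": 100.0, "mu_step": 1.0,   "cons_targ": 0.1},
    "CA": {"embed_size": 32, "depth": 10, "batch_size": 5, "num_samples": 15,
           "num_epochs": 2400, "optimizer": "Adam", "learning_rate": 8e-5,
           "lr_T": 200, "mu_init": 15.0,  "mu_step": 0.001, "cons_targ": 0.1},
}
```

Note that only a handful of values vary across benchmarks (depth, epoch budget, learning rate, and the penalty-related settings µ-init, µ-step, and the constraint-violation target), which makes the shared defaults easy to spot.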