When GNNs meet symmetry in ILPs: an orbit-based feature augmentation approach

Authors: Qian Chen, Lei Li, Qian Li, Jianghua Wu, Akang Wang, Ruoyu Sun, Xiaodong Luo, Tsung-Hui Chang, Qingjiang Shi

ICLR 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | In this section, we present numerical experiments to validate the effectiveness of the proposed approaches.
Researcher Affiliation | Academia | 1 School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China; 2 Shenzhen International Center for Industrial and Applied Mathematics, Shenzhen Research Institute of Big Data, China; 3 School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; 4 School of Artificial Intelligence, The Chinese University of Hong Kong, Shenzhen, China; 5 School of Software Engineering, Tongji University, Shanghai, China
Pseudocode | Yes | Algorithm 1: Orbit-based feature augmentation
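The core idea behind orbit-based feature augmentation is to append a random feature to each variable so that otherwise-symmetric variables (those sharing an orbit) become distinguishable to a GNN. The sketch below is an illustrative reconstruction, not the paper's Algorithm 1; the function name `orbit_feature_augmentation` and the choice of normalized permutation values are assumptions for demonstration.

```python
import numpy as np

def orbit_feature_augmentation(num_vars, orbits, rng=None):
    """Append one random augmentation feature per variable.

    Variables in the same orbit receive distinct random values so a GNN
    can tell symmetric variables apart; variables in singleton orbits
    keep a constant (zero) augmentation.
    """
    rng = np.random.default_rng(rng)
    aug = np.zeros(num_vars)
    for orbit in orbits:
        if len(orbit) > 1:
            # Distinct values within the orbit break the symmetry.
            aug[list(orbit)] = rng.permutation(len(orbit)) / len(orbit)
    return aug
```

Because the values are drawn fresh on every call, each call yields a different symmetry-breaking sample, which is what makes resampling per training instance (see the Experiment Setup row) meaningful.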
Open Source Code | Yes | The source code is available at https://github.com/NetSysOpt/GNNsSymILPs.
Open Datasets | Yes | BPP: The bin packing problem (BPP) is a well-known practical problem in which items must be placed into bins without exceeding capacity limits; the objective is to minimize the total number of bins used. We generate 500 instances, each with 20 items, following the generation strategies outlined by Schwerin & Wäscher (1997). These instances contain 420 variables and 40 constraints, with an average of 14 orbits and orbit cardinalities of up to 140.
BIP: The balanced item placement problem (BIP) also involves assigning items to bins; however, unlike bin packing, the goal is to balance resource usage across bins. We use 300 instances from the ML4CO competition benchmarks (Gasse et al., 2022). These instances feature 1,083 variables and 195 constraints, with an average of 100 orbits.
SMSP: The steel mill slab design problem (SMSP) is a variant of the cutting stock problem in which customer orders are assigned to slabs under color constraints, with the aim of minimizing total waste. We use 380 instances from Schaus et al. (2011), each containing between 22,000 and 24,000 variables and nearly 10,000 constraints. On average, these instances have 110 orbits, with orbit cardinalities varying between 111 and 1,000.
Dataset Splits | Yes | In our experiments, 60% of the instances are used for training, and the remaining 40% are reserved for validation.
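The 60/40 split can be reproduced with a few lines; the paper does not specify whether the split is random or follows a fixed ordering, so the shuffled, seeded version below is an assumption (as is the helper name `split_instances`).

```python
import random

def split_instances(instances, train_frac=0.6, seed=0):
    """Shuffle instance indices with a fixed seed, then cut 60/40."""
    idx = list(range(len(instances)))
    random.Random(seed).shuffle(idx)
    cut = int(train_frac * len(idx))
    train = [instances[i] for i in idx[:cut]]
    val = [instances[i] for i in idx[cut:]]
    return train, val
```

For the 500 BPP instances this yields 300 training and 200 validation instances.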
Hardware Specification | No | No specific hardware details (such as GPU/CPU models) are provided in the paper. The text only refers to computational effort and time without specifying the underlying hardware.
Software Dependencies | No | The paper mentions using the ILP solvers SCIP and CPLEX, and the symmetry detection tools Bliss and Nauty, but does not provide specific version numbers for any of these software components.
Experiment Setup | Yes | In the training configuration, we utilize the Adam optimizer with a learning rate of 0.0001 and a batch size of 8. All models are trained for 100 epochs, with the parameters corresponding to the lowest validation loss preserved for subsequent evaluation. Since all augmented features are randomly generated, multiple samples should be drawn for each training instance to mitigate overfitting. Accordingly, we sample 8 times for each training instance, while only a single sample is taken for each instance in the test set.
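The stated hyperparameters and the per-instance resampling scheme can be summarized in a small config sketch. Only the numbers come from the paper; the dictionary keys and the helper `expand_with_augmentations` are illustrative names, and the augmentation callable is left abstract.

```python
# Hyperparameters as reported in the paper's experiment setup.
TRAIN_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "batch_size": 8,
    "epochs": 100,
    "train_samples_per_instance": 8,  # fresh random augmentations
    "test_samples_per_instance": 1,
}

def expand_with_augmentations(instances, samples_per_instance, augment):
    """Draw several independent augmentation samples per instance.

    Each sample applies the (random) augmentation anew, so the training
    set grows by a factor of `samples_per_instance`.
    """
    return [augment(inst) for inst in instances
            for _ in range(samples_per_instance)]
```

With 8 samples per training instance, a set of N training instances yields 8N augmented training examples, which is what mitigates overfitting to any single random augmentation.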