Graph-Supported Dynamic Algorithm Configuration for Multi-Objective Combinatorial Optimization

Authors: Robbert Reijnen, Yaoxin Wu, Zaharah Bukhsh, Yingqian Zhang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on diverse MOCO challenges indicate that our method outperforms traditional and DRL-based algorithm configuration methods in terms of efficacy and adaptability."
Researcher Affiliation | Academia | "Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands. Correspondence to: Yaoxin Wu <EMAIL>."
Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block; the methodology is described in text and mathematical formulations.
Open Source Code | Yes | "All code and experimental details are made publicly available." https://github.com/RobbertReijnen/GS-MODAC
Open Datasets | Yes | "For FJSP, we generate train and test instances for three distinct problem sizes: 1) 5 jobs and 5 machines (5j5m), 2) 10 jobs and 5 machines (10j5m), and 3) 25 jobs and 5 machines (25j5m), following the instance generation configuration of Song et al. (2022). For CVRP, we generate 3 distinct sizes of 100, 200, and 500 customers, according to the instance generation of da Costa et al. (2021)."
Dataset Splits | Yes | "We generate 200 instances for each size, consisting of 100 instances for training and 100 for testing."
Hardware Specification | Yes | "The training process ... was carried out on an Intel(R) Core(TM) i7-6920HQ CPU @ 2.90GHz with 8.0GB of RAM and five parallel environments."
Software Dependencies | No | The paper mentions using the Proximal Policy Optimization (PPO) algorithm and Graph Convolutional Networks (GCN) but does not specify version numbers for any software dependencies or libraries used.
Experiment Setup | Yes | "The action space for NSGA-II is defined as two continuous actions with ranges [0.6, 1.0] and [0.0, 0.1] for the crossover and mutation rates of NSGA-II. The training process involved 1,000,000 steps for the scheduling problems and 2,500,000 steps for the routing, configured with 50 generations of search and a population size of 50. The training process spans 2000 epochs with 500 steps per epoch. The model parameters are set as in Schulman et al. (2017), and the network layers are configured with 64 nodes."
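The two-dimensional action space described above can be sketched as a simple range-mapping helper. Only the parameter ranges ([0.6, 1.0] for crossover, [0.0, 0.1] for mutation) come from the paper; the `scale_action` name and the assumption that the policy emits raw actions in [-1, 1] are illustrative, not taken from the authors' code.

```python
import numpy as np

# Parameter ranges from the paper's experiment setup:
# action[0] -> NSGA-II crossover rate in [0.6, 1.0]
# action[1] -> NSGA-II mutation rate in [0.0, 0.1]
LOW = np.array([0.6, 0.0])
HIGH = np.array([1.0, 0.1])

def scale_action(raw):
    """Map a raw policy output in [-1, 1]^2 onto the configured
    NSGA-II parameter ranges (clipping out-of-range values first)."""
    raw = np.clip(np.asarray(raw, dtype=float), -1.0, 1.0)
    return LOW + (raw + 1.0) / 2.0 * (HIGH - LOW)
```

For example, a raw action of `[0.0, 0.0]` maps to the midpoints of the two ranges, i.e. a crossover rate of 0.8 and a mutation rate of 0.05.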