Regularized Langevin Dynamics for Combinatorial Optimization
Authors: Shengyu Feng, Yiming Yang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on three classic CO problems demonstrate that both of our methods can achieve comparable or better performance against the previous state-of-the-art (SOTA) SA- and NN-based solvers. In particular, our SA algorithm reduces the runtime of the previous SOTA SA method by up to 80%, while achieving equal or superior performance. |
| Researcher Affiliation | Academia | Language Technologies Institute, Carnegie Mellon University. Correspondence to: Shengyu Feng <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Regularized Langevin Simulated Annealing Algorithm 2 Regularized Langevin Neural Network |
| Open Source Code | Yes | Our code is available at https://github.com/Shengyu-Feng/RLD4CO. |
| Open Datasets | Yes | Benchmark datasets. Following Zhang et al. (2023), we evaluate MIS and MCl using Revised Model B (RB) graphs (Xu & Li, 2000), and we evaluate MCut using Barabási–Albert (BA) graphs (Barabási & Albert, 1999). Following Qiu et al. (2022), we also include Erdős–Rényi (ER) graphs for MIS. |
| Dataset Splits | Yes | For RB and BA (both scales) and ER-[700-800], we use 1000 graphs for training, 100 for validation, and 500 (RB/BA) or 128 (ER-[700-800]) for testing. ER-[9000-11000] is reserved solely for testing (16 graphs). |
| Hardware Specification | Yes | We use two servers for the RLNN training, one with 8 NVIDIA RTX A6000 GPUs and the other with 10 NVIDIA RTX 2080 Ti GPUs. We evaluate all methods on the first server, using a single A6000 GPU. |
| Software Dependencies | No | The paper mentions software such as PyTorch Geometric, Accelerate, PyTorch, JAX, and Gurobi, but does not provide specific version numbers for these components. For example, it cites 'PyTorch (Paszke et al., 2017)', which refers to the paper describing PyTorch, not the version used in the experiments. |
| Experiment Setup | Yes | Tables 4 and 5 summarize the hyperparameters for RLSA and RLNN, respectively. We use random search to determine the initial temperature τ0 within [0.001, 10] and the regularized step size d within [2, 100]... In our experiment, we parameterize RLNN with a five-layer GCN (Kipf & Welling, 2017) with 128 hidden dimensions... We train RLNN with 50 epochs on all datasets... The batch size is set to 32 per GPU, and we optimize with Adam at a learning rate of 0.0001. |
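For concreteness, the reported RLNN architecture (a five-layer GCN with 128 hidden dimensions) can be sketched as a minimal NumPy forward pass using the symmetric normalization of Kipf & Welling (2017). This is an illustrative sketch only: the ReLU nonlinearity, input dimension, and random graph below are assumptions, not details confirmed by the paper.

```python
import numpy as np

def gcn_forward(adj, feats, weights):
    """Stacked GCN layers: H <- ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # degrees are >= 1
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    h = feats
    for w in weights:
        h = np.maximum(a_norm @ h @ w, 0.0)        # ReLU (assumed)
    return h

rng = np.random.default_rng(0)
n_nodes, in_dim, hidden = 6, 16, 128               # in_dim is illustrative
adj = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                                  # symmetric, no self-loops
x = rng.standard_normal((n_nodes, in_dim))

# Five layers with 128 hidden dimensions, as in the reported setup.
dims = [in_dim] + [hidden] * 5
ws = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims, dims[1:])]
out = gcn_forward(adj, x, ws)
print(out.shape)  # (6, 128)
```

Training details from the row above (Adam, learning rate 0.0001, batch size 32 per GPU, 50 epochs) would sit in the outer training loop and are omitted here.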