Non-Equilibrium Dynamics of Hybrid Continuous-Discrete Ground-State Sampling
Authors: Timothee Leleu, Sam Reifenstein
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results show that MH-driven dynamics reach easy ground states more quickly, indicating a stronger bias toward these solutions in algorithms using reversible transition probabilities. To validate this, we construct a set of Ising problem instances with a controllable bias in the energy landscape that makes certain degenerate solutions more accessible than others. The constructed hybrid algorithm demonstrates significant improvements in convergence and ground-state sampling accuracy, achieving a 100x speedup on GPU compared to simulated annealing, making it well-suited for large-scale applications. |
| Researcher Affiliation | Collaboration | Timothée Guillaume Leleu (NTT Research, Stanford University); Sam Reifenstein (NTT Research) |
| Pseudocode | Yes | In summary, the pseudocode of our approach is given in Section S2 of the Appendix. ... S2 PSEUDO-CODE OF MHCACM Algorithm 1: Metropolis-Hastings adjusted chaotic amplitude control with momentum |
| Open Source Code | Yes | The code for our algorithm can be found in this online repository: GitHub repository. |
| Open Datasets | Yes | We construct a set of planted Ising problem instances that exhibit ground states with different tunable properties. The construction of these instances is inspired by the Wishart planted ensemble (WPE) (Hamze et al., 2020). ... The GSET (Ye, 2024) is a set of Max-Cut problem instances which are commonly used to benchmark Ising and Max-Cut solvers (Goto et al., 2021; Leleu et al., 2019; Benlic & Hao, 2013). To test the ability of MHCACm to find optimal solutions on larger optimization problems, we briefly present some results on these instances in comparison with a state-of-the-art algorithm known as dSBM (Goto et al., 2021). Table 3 shows that MHCACm reaches best known solutions faster than dSBM (Goto et al., 2021) on the first few instances of the GSET problem set (Ye, 2024). ... Yinyu Ye. Gset: A collection of graphs, 2024. URL https://web.stanford.edu/~yyye/yyye/Gset/. Accessed: 2024-05-22. |
| Dataset Splits | No | The paper describes generating and solving Ising problem instances (e.g., Wishart planted ensemble, GSET), which are combinatorial optimization problems rather than machine learning datasets requiring train/validation/test splits. The concept of dataset splits for model training is therefore not applicable and is not mentioned in this work. |
| Hardware Specification | Yes | In Table 4 we compare the wall-clock TTS of our PyTorch implementation of MHCACm finding ground states of a degenerate WPE instance of size N = 100. We also show the wall-clock time of a standard implementation of SA (Tiosue, 2024) running on a CPU. MHCACm on GPU exhibits a 100x speedup in real time against simulated annealing on CPU for sampling easy optimal solutions. ... Table 4: Wall-clock times of algorithms on different computing platforms. TTS is the time to find any ground state of a degenerate WPE instance with αWPE = 0.8, N = 100, and bias b = 12. The GPU used is an Nvidia V100 and the CPU wall time is calculated on a 2024 MacBook Pro (Apple M3 chip). ... S7 COMPUTING ENVIRONMENT All simulations are done on a CPU using an Intel Core i9-11950H (8 cores, 2.6 GHz) and a GPU using an NVIDIA RTX A300; except for Table 3 of the main manuscript, for which the GPU used is an Nvidia V100 and the CPU wall time is calculated on a 2024 MacBook Pro (Apple M3 chip). |
| Software Dependencies | No | The paper mentions that the code is written in 'python and pytorch' and that PyTorch is used for GPU computations, but it does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The algorithms used for comparison are all tuned to nearly optimal parameters by minimizing the TTS. We use a state-of-the-art autotuning method called dynamic anisotropic smoothing (DAS) (Reifenstein et al., 2024), which is particularly well suited to optimizing the parameters of heuristics for combinatorial optimization. The use of an automatic parameter tuner also guarantees a fair comparison between algorithms (see Appendix S4). ... S4 AUTOTUNING USING DYNAMIC ANISOTROPIC SMOOTHING The parameters of the solvers are described in Table S1. We have run the autotuning algorithm DAS (Reifenstein et al., 2024) for each problem setting (problem size N, bias b). The MH sampling rate is set to fMH = 0.1. Table S1, parameters of the algorithms tuned using DAS: SA, 3 parameters (βini, βfin, T); AIM, 4 (β, α, γ, T); CAC, 5 (β, α, ξ, a, T); CACm, 6 (β, α, ξ, a, γ, T); MHCACm, 7 (β, κ, α, ξ, a, γ, T). |
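The "reversible transition probabilities" the review quotes refer to the standard Metropolis-Hastings acceptance rule for Ising spin flips. As a point of reference, here is a minimal NumPy sketch of one MH sweep; this is the generic single-spin-flip MH kernel only, not the paper's full MHCACm dynamics, and the function name and signature are illustrative:

```python
import numpy as np

def mh_sweep(J, s, beta, rng):
    """One Metropolis-Hastings sweep over an Ising configuration
    s in {-1,+1}^N with symmetric couplings J (zero diagonal).
    Energy convention: E(s) = -0.5 * s^T J s."""
    N = len(s)
    for i in rng.permutation(N):
        # Energy change from flipping spin i: dE = 2 * s_i * (J s)_i
        dE = 2.0 * s[i] * (J[i] @ s)
        # Reversible acceptance: accept with probability min(1, exp(-beta * dE))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]
    return s
```

At large beta the chain behaves greedily (downhill moves always accepted, uphill moves almost never), which is why annealing schedules such as SA's (βini, βfin) sweep beta upward over the run.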
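The planted instances are inspired by the Wishart planted ensemble (Hamze et al., 2020): couplings are built from Gaussian vectors projected orthogonal to a chosen planted state, so that state is a ground state by construction. The sketch below is a simplified version of that idea; the exact scaling conventions and the paper's tunable bias parameter b are omitted, and `wishart_planted` is an illustrative name, not the authors' code:

```python
import numpy as np

def wishart_planted(N, alpha, rng):
    """Build an Ising coupling matrix J whose ground state is a
    planted spin vector t (simplified WPE-style construction)."""
    M = max(1, int(round(alpha * N)))
    t = rng.choice([-1.0, 1.0], size=N)       # planted ground state
    W = rng.standard_normal((N, M))
    W -= np.outer(t, t @ W) / N               # make every column orthogonal to t
    J = -(W @ W.T) / N                        # then -0.5 s^T J s = ||W^T s||^2 / (2N)
    np.fill_diagonal(J, 0.0)                  # zero diagonal shifts E by a constant
    return J, t

def ising_energy(J, s):
    return -0.5 * s @ J @ s
```

Since W^T t = 0 exactly after the projection, E(t) attains the global minimum, which is what makes the ground-state sampling accuracy of a solver measurable on these instances.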
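The TTS metric minimized during autotuning and reported in Tables 3 and 4 is conventionally the expected time to observe at least one ground state with 99% confidence, given the per-run time and per-run success probability. A small sketch of that standard definition (the paper may use a variant; this is the common benchmarking convention, not a quote from the paper):

```python
import math

def time_to_solution(t_run, p_success, target=0.99):
    """Expected time to reach a ground state at least once with
    probability `target`, given per-run wall time t_run and
    per-run success probability p_success."""
    if p_success >= target:
        return t_run
    # Number of independent runs R needed: 1 - (1 - p)^R >= target
    return t_run * math.log(1.0 - target) / math.log(1.0 - p_success)
```

Because TTS folds run time and success rate into one number, minimizing it with DAS tunes each solver toward its own best speed/accuracy trade-off, which is what makes the cross-solver comparison fair.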