Quantum Optimization via Gradient-Based Hamiltonian Descent

Authors: Jiaqi Leng, Bin Shi

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Numerical simulations on challenging problem instances demonstrate that gradient-based QHD outperforms existing quantum and classical methods by at least an order of magnitude. In addition to the theoretical analysis, we conduct a numerical study to evaluate the performance of gradient-based QHD in both convex and non-convex optimization. Our results show that gradient-based QHD achieves an enhanced performance compared to standard QHD and other prominent classical optimization algorithms. In some cases, gradient-based QHD yields solutions that are an order of magnitude better than those obtained by other methods.
Researcher Affiliation Academia 1Simons Institute for the Theory of Computing, University of California, Berkeley, USA 2Department of Mathematics, University of California, Berkeley, USA 3Center for Mathematics and Interdisciplinary Sciences, Fudan University, Shanghai, China 4Shanghai Institute for Mathematics and Interdisciplinary Sciences, Shanghai, China. Correspondence to: Jiaqi Leng <EMAIL>.
Pseudocode Yes We develop a quantum algorithm that simulates discrete-time gradient-based QHD to solve optimization problems (Algorithm 1). With a gate complexity linear in problem dimension d, this quantum algorithm is readily scalable to handle large-scale problems in practice. Algorithm 1 Gradient-based QHD with fixed step size
Open Source Code Yes The source code of the experiments is available at https://github.com/jiaqileng/Gradient-Based-QHD.
Open Datasets No The test problems used in this paper are defined as follows: 1. Styblinski-Tang function: f(x, y) = 0.2(x^4 − 16x^2 + 5x + y^4 − 16y^2 + 5y). ... 2. Michalewicz function: f(x, y) = −sin(x) sin^20(x^2/π) − sin(y) sin^20(2y^2/π). ... 3. Cube-Wave function: f(x, y) = cos^2(πx) + 0.25x^4 + cos^2(πy) + 0.25y^4. ... 4. Rastrigin function: f(x, y) = x^2 − 10 cos(2πx) + y^2 − 10 cos(2πy) + 20.
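As a quick sketch, the four 2-D test problems can be written as plain Python functions. The sign conventions below follow the standard definitions of Styblinski-Tang, Michalewicz (steepness m = 10, i.e. exponent 20), and Rastrigin; the 0.2 prefactor on Styblinski-Tang is taken from the text above and may differ from the paper's exact definition.

```python
import math

def styblinski_tang(x, y):
    # Standard Styblinski-Tang terms, with the 0.2 prefactor quoted above.
    return 0.2 * (x**4 - 16 * x**2 + 5 * x + y**4 - 16 * y**2 + 5 * y)

def michalewicz(x, y):
    # Michalewicz function with steepness m = 10 (exponent 2m = 20).
    return (-math.sin(x) * math.sin(x**2 / math.pi) ** 20
            - math.sin(y) * math.sin(2 * y**2 / math.pi) ** 20)

def cube_wave(x, y):
    # Periodic cosine wells plus a quartic confining term.
    return (math.cos(math.pi * x) ** 2 + 0.25 * x**4
            + math.cos(math.pi * y) ** 2 + 0.25 * y**4)

def rastrigin(x, y):
    # Shifted 2-D Rastrigin function; global minimum f(0, 0) = 0.
    return (x**2 - 10 * math.cos(2 * math.pi * x)
            + y**2 - 10 * math.cos(2 * math.pi * y) + 20)
```

All four are highly non-convex in (x, y), which is what makes them useful stress tests for global optimizers.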
Dataset Splits No The paper uses mathematical functions as test problems, which do not involve datasets or their splits for training, validation, or testing.
Hardware Specification Yes The numerical simulations of the quantum algorithms, including QHD and gradient-based QHD, are performed on a MacBook equipped with an M4 chip.
Software Dependencies No Our Python implementation of the numerical algorithms can be found in the supplementary materials. The paper mentions Python but does not specify any version numbers for Python or any specific libraries used.
Experiment Setup Yes For all the subsequent experiments, we set δ = 1. All methods are executed with a fixed step size h = 0.2. For the quantum variants, the initial evolution time is set to t0 = 1. The parameters of gradient-based QHD are configured as α = 0.1, β = 0, and γ = 5. The numerical simulations of the quantum algorithms, including QHD and gradient-based QHD, are performed on a MacBook equipped with an M4 chip. Additional details on the numerical methods employed are provided in Appendix E.1. The step size varies with the test problems: we use h = 0.01 for the Styblinski-Tang and Michalewicz functions, h = 0.02 for the Cube-Wave function, and h = 0.005 for the Rastrigin function. For the two quantum algorithms, the evolution time starts from t0 = 0. In gradient-based QHD, the parameters are set to α = 0.05, β = 0, and γ = 5. For SGDM: v_k = η_k v_{k−1} − (1 − η_k) s_k g_k, x_{k+1} = x_k + v_k, where η_k = 0.5 + 0.4k/K, s_k = s_0 with s_0 = 0.01, v_0 = 0, and a uniformly random initial guess x_0. For NAG: x_k = y_{k−1} − s∇f(y_{k−1}), y_k = x_k + ((k − 1)/(k + 2))(x_k − x_{k−1}), where we choose y_0 = 0 and a uniformly random initial guess x_0. The step size is chosen as s = 0.01.
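The two classical baselines quoted above can be sketched as short Python loops. The toy objective f(x) = (x − 3)^2, its gradient, and the iteration count K are illustrative choices for this sketch, not the paper's setup; the update rules themselves follow the SGDM and NAG recursions as quoted.

```python
import random

def grad(x):
    # Gradient of the toy objective f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0)

def sgdm(x0, K=500, s0=0.01):
    # SGDM as quoted: v_k = eta_k * v_{k-1} - (1 - eta_k) * s_k * g_k,
    # x_{k+1} = x_k + v_k, with eta_k = 0.5 + 0.4k/K and s_k = s0, v_0 = 0.
    x, v = x0, 0.0
    for k in range(1, K + 1):
        eta = 0.5 + 0.4 * k / K
        v = eta * v - (1.0 - eta) * s0 * grad(x)
        x = x + v
    return x

def nag(x0, K=500, s=0.01):
    # NAG as quoted: x_k = y_{k-1} - s * grad(y_{k-1}),
    # y_k = x_k + ((k - 1) / (k + 2)) * (x_k - x_{k-1}).
    x_prev = x0
    y = x0  # common choice y_0 = x_0; the text above states y_0 = 0
    for k in range(1, K + 1):
        x = y - s * grad(y)
        y = x + (k - 1) / (k + 2) * (x - x_prev)
        x_prev = x
    return x_prev

x0 = random.uniform(-5.0, 5.0)  # uniformly random initial guess
```

With a step size of 0.01 on this smooth toy objective, both loops drive the iterate toward the minimizer x = 3; on the paper's non-convex test functions, convergence to the global minimum is of course not guaranteed.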