Provable Convergence Bounds for Hybrid Dynamical Sampling and Optimization

Authors: Matthew Burns, Qingyuan Hou, Michael Huang

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this work, we provide non-asymptotic convergence guarantees for hybrid LNLS by reducing to block Langevin Diffusion (BLD) algorithms. Adapting tools from classical sampling theory, we prove exponential KL-divergence convergence for randomized and cyclic block selection strategies using ideal DXs. With finite device variation, we provide explicit bounds on the 2-Wasserstein bias in terms of step duration, noise strength, and function parameters. Our BLD model provides a key link between established theory and novel computing platforms, and our theoretical results provide a closed-form expression linking device variation, algorithm hyperparameters, and performance. ... 4 NUMERICAL EXPERIMENTS As an illustrative example, we simulated CBLD and RBLD behavior for d = 50 Gaussian sampling with β = 1 and uniform λi = λ1. Gaussian distributions permit closed-form solutions for DKL(N(u1, Σ1) ‖ N(u2, Σ2)), allowing for a quantitative estimate of convergence. ... Fig. 2a shows the convergence in DKL for the estimated distribution µt with block counts b ∈ {1, 2, 5, 10} versus simulated time.
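The closed-form KL divergence between multivariate Gaussians that the experiments rely on can be sketched as below. This is a minimal NumPy implementation of the standard formula; the function name and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def gaussian_kl(u1, S1, u2, S2):
    """D_KL(N(u1, S1) || N(u2, S2)) via the standard closed form:
    0.5 * [tr(S2^-1 S1) + (u2-u1)^T S2^-1 (u2-u1) - d + ln(det S2 / det S1)]
    """
    d = u1.shape[0]
    S2_inv = np.linalg.inv(S2)
    diff = u2 - u1
    # slogdet avoids overflow/underflow when computing the log-determinant ratio
    _, logdet1 = np.linalg.slogdet(S1)
    _, logdet2 = np.linalg.slogdet(S2)
    return 0.5 * (np.trace(S2_inv @ S1) + diff @ S2_inv @ diff - d
                  + logdet2 - logdet1)
```

For identical Gaussians the divergence is zero; shifting only the mean by a vector m against an identity covariance gives 0.5‖m‖², which is a quick sanity check for an implementation like this.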
Researcher Affiliation | Academia | Matthew X. Burns, Qingyuan Hou & Michael C. Huang, Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14527, USA. EMAIL EMAIL
Pseudocode | Yes | Algorithm 1 Block Langevin Diffusion (BLD) ... Algorithm 2 Randomized Block Langevin Dynamics (RBLD) ... Algorithm 3 Randomized Block Langevin Monte Carlo (RBLMC) ... Algorithm 4 Cyclic Block Langevin Diffusion (CBLD) ... Algorithm 5 Cyclic Block Langevin Monte Carlo (CBLMC)
Open Source Code | Yes | Code and data can be found at https://github.com/ur-acal/BlockLangevin.
Open Datasets | No | As an illustrative example, we simulated CBLD and RBLD behavior for d = 50 Gaussian sampling with β = 1 and uniform λi = λ1. Gaussian distributions permit closed-form solutions for DKL(N(u1, Σ1) ‖ N(u2, Σ2)), allowing for a quantitative estimate of convergence. ... The d = 50 Gaussian used to produce Fig. 3 was generated using the following procedure: 1. Generate a 50 × 50 matrix Σπ⁻¹ with elements ~ Unif[−5, 5]. 2. Make the matrix symmetric by setting Σπ⁻¹ = ½(Σπ⁻¹ + (Σπ⁻¹)ᵀ). 3. To make Σπ positive definite, set Σπ⁻¹ = Σ⁻¹ + 1.2λmin I50.
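The three-step construction quoted above can be sketched in NumPy as follows. This is a reconstruction under the assumption that λmin in step 3 refers to the magnitude of the smallest (most negative) eigenvalue of the symmetrized matrix, since adding a negative multiple would not make the matrix positive definite; the function name and seed handling are illustrative.

```python
import numpy as np

def make_precision(d=50, lo=-5.0, hi=5.0, seed=0):
    """Generate a random symmetric positive-definite precision matrix
    following the paper's three-step recipe (as interpreted here)."""
    rng = np.random.default_rng(seed)
    # Step 1: random d x d matrix with entries ~ Unif[lo, hi]
    P = rng.uniform(lo, hi, size=(d, d))
    # Step 2: symmetrize
    P = 0.5 * (P + P.T)
    # Step 3: shift the spectrum so all eigenvalues become positive
    lam_min = np.linalg.eigvalsh(P)[0]  # smallest eigenvalue
    P = P + 1.2 * abs(lam_min) * np.eye(d)
    return P
```

After the shift, the smallest eigenvalue is lam_min + 1.2|lam_min| ≥ 0.2|lam_min| > 0, so the resulting matrix is a valid precision matrix.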
Dataset Splits | No | The paper simulates Gaussian sampling by generating a distribution and initializing 10⁴ states. There is no mention of traditional training, validation, or test dataset splits, as it is a simulation rather than a model trained on a pre-existing dataset.
Hardware Specification | Yes | All code is implemented in PyTorch and was run on a desktop system using an i9-13900K with 64 GB of RAM and an RTX 4090 GPU.
Software Dependencies | No | All code is implemented in PyTorch and was run on a desktop system using an i9-13900K with 64 GB of RAM and an RTX 4090 GPU. ... For completeness we compute both W2 and DKL using ... and a PyTorch library function for DKL. The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with versioning.
Experiment Setup | Yes | As an illustrative example, we simulated CBLD and RBLD behavior for d = 50 Gaussian sampling with β = 1 and uniform λi = λ1. ... We simulate Langevin SDEs using an Euler-Maruyama integration scheme with a time step size of 1.6 × 10⁻¹¹ seconds. ... We randomly initialize N = 10⁴ states and evolve them in parallel with equivalent block selections. Every 30 iterations (time t) we compute the empirical covariance matrix Σt and the empirical mean ut to obtain the estimated µt,Est = N(ut, Σt).
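The simulation loop described in this row (Euler-Maruyama steps, parallel chains, empirical moments every 30 iterations) can be sketched as below. The paper's code is in PyTorch and implements block Langevin dynamics; this NumPy sketch instead shows plain (full-coordinate) overdamped Langevin dynamics dx = −Px dt + √(2/β) dW for a zero-mean Gaussian target with precision matrix P, so the block-selection logic and all function/parameter names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def simulate_langevin(P, n_states=10_000, n_steps=90, dt=1.6e-11,
                      beta=1.0, log_every=30, seed=0):
    """Euler-Maruyama integration of overdamped Langevin dynamics for a
    zero-mean Gaussian target with precision matrix P, evolving n_states
    chains in parallel and logging empirical moments every `log_every`
    steps (mirroring the paper's every-30-iterations estimate)."""
    d = P.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_states, d))  # random initial states
    moments = []
    for step in range(1, n_steps + 1):
        noise = rng.standard_normal((n_states, d))
        # gradient of f(x) = 0.5 x^T P x is P x; P is symmetric so x @ P works row-wise
        x = x - (x @ P) * dt + np.sqrt(2.0 * dt / beta) * noise
        if step % log_every == 0:
            u_t = x.mean(axis=0)               # empirical mean
            Sigma_t = np.cov(x, rowvar=False)  # empirical covariance
            moments.append((u_t, Sigma_t))
    return moments
```

The logged (u_t, Σ_t) pairs define the estimated distribution µt,Est = N(ut, Σt), which can then be compared against the target via the closed-form Gaussian KL divergence.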