Langevin Monte Carlo Beyond Lipschitz Gradient Continuity

Authors: Matej Benko, Iwona Chlebicka, Jørgen Endal, Błażej Miasojedow

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we demonstrate the application of IPLA on three examples implemented in Python. We analyze the convergence rates and bias of the algorithm compared to two related LMC algorithms: TULA (Tamed Unadjusted Langevin Algorithm; Brosse et al. 2019) and ULA (Unadjusted Langevin Algorithm; Durmus and Moulines 2019).
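For context on the two baselines named above, the standard ULA and TULA update rules from the cited literature can be sketched as follows (the potential V used here is an illustrative light-tailed choice, not necessarily the paper's):

```python
import numpy as np

def ula_step(x, grad_V, gamma, rng):
    """One Unadjusted Langevin Algorithm step (Durmus & Moulines form):
    Euler-Maruyama discretization of the Langevin diffusion."""
    return x - gamma * grad_V(x) + np.sqrt(2 * gamma) * rng.standard_normal(x.shape)

def tula_step(x, grad_V, gamma, rng):
    """One Tamed ULA step (Brosse et al. form): the drift is tamed so it
    stays bounded per step even when grad V grows superlinearly."""
    g = grad_V(x)
    tamed = g / (1.0 + gamma * np.linalg.norm(g))
    return x - gamma * tamed + np.sqrt(2 * gamma) * rng.standard_normal(x.shape)

# Demo on V(x) = |x|^4 / 4 (illustrative), starting in the tail as in the paper
rng = np.random.default_rng(0)
grad_V = lambda x: np.linalg.norm(x) ** 2 * x
x = np.full(3, 7.0)
for _ in range(1000):
    x = tula_step(x, grad_V, gamma=1e-2, rng=rng)
```

With a superlinear gradient like this one, plain `ula_step` can blow up for moderate step sizes, which is precisely the regime that motivates taming and proximal variants.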
Researcher Affiliation | Academia | 1 Institute of Mathematics, Faculty of Mechanical Engineering, Brno University of Technology, Technická 2896/2, 616 69 Brno, Czech Republic; 2 Institute of Applied Mathematics and Mechanics, University of Warsaw, ul. Banacha 2, 02-097 Warsaw, Poland; 3 Department of Mathematical Sciences, Norwegian University of Science and Technology (NTNU), N-7491 Trondheim, Norway. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Inexact Proximal Langevin Algorithm (IPLA)
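The paper's Algorithm 1 is not reproduced in this report; as a rough orientation only, a generic proximal Langevin step with an inexactly computed proximal map, consistent with the algorithm's name but not necessarily its exact update rule or inexactness criterion, might look like this:

```python
import numpy as np

def inexact_prox(v, grad_V, gamma, delta, lr, max_iter=500):
    """Approximate prox_{gamma V}(v) = argmin_u V(u) + |u - v|^2 / (2 gamma)
    by gradient descent on the prox objective, stopping once the gradient
    norm falls below delta (an illustrative inexactness criterion; the
    paper may define and certify inexactness differently)."""
    u = v.copy()
    for _ in range(max_iter):
        g = grad_V(u) + (u - v) / gamma
        if np.linalg.norm(g) <= delta:
            break
        u = u - lr * g
    return u

def ipla_like_step(x, grad_V, gamma, delta, rng):
    """One proximal-Langevin-style step: inexact prox plus Gaussian noise
    (a sketch, not the paper's Algorithm 1 verbatim)."""
    noise = np.sqrt(2 * gamma) * rng.standard_normal(x.shape)
    return inexact_prox(x, grad_V, gamma, delta, lr=gamma / 2) + noise

# Demo on the illustrative light-tailed potential V(x) = |x|^4 / 4
rng = np.random.default_rng(0)
grad_V = lambda x: np.linalg.norm(x) ** 2 * x
x = np.full(3, 7.0)
for _ in range(200):
    x = ipla_like_step(x, grad_V, gamma=1e-2, delta=1e-1, rng=rng)
```

Unlike the tamed-gradient approach, the proximal map remains well defined for convex potentials with superlinearly growing gradients, which is the setting the paper's title points at.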
Open Source Code | Yes | https://github.com/192459/lmc-beyond-lipschitz-gradient-continuity
Open Datasets | No | The paper uses synthetic distributions for Examples 1 and 2; for Example 3 it uses a high-resolution grayscale image credited to an artist, but no explicit dataset name, citation, or access link is provided for the image data itself. The citation covers the problem setup, not the image data.
Dataset Splits | No | The paper describes a number of samples and a burn-in time for its Monte Carlo simulations (e.g., 'We have run 10^5 samples with burn-in time 10^4.'), which is typical for sampling algorithms but does not constitute the training/validation/test splits used in supervised learning.
Hardware Specification | Yes | "We write the time of computations on standard MacBook Air M1 2020."
Software Dependencies | No | The paper mentions using 'the standard Python library SciPy (Virtanen et al. 2020)' and 'the PyProximal library (Ravasi et al. 2024)' but does not specify exact version numbers for these libraries, which reproducibility requires.
Experiment Setup | Yes |
Example 1 (Distribution With Light Tails): We estimate the moments E|Y|^2, E|Y|^4, and E|Y|^6 in dimension d = 10^3. We have run 10^5 samples with burn-in time 10^4. The initial value in the first scenario is x0 = 7·1_d (start in the tail), while in the second we start at x0 = 0 (the minimizer of V). Each experiment has been repeated 100 times.
Example 2 (Ginzburg–Landau Model): We consider κ = 0.1, ς = 0.5, υ = 2 and q = 5; the dimension in this example is d = q^3 = 125. We have run 2×10^4 samples with burn-in time 10^4, with the same two scenarios as in Example 1: starting in the tail at x0 = (100, 0, …, 0) and starting in the minimizer at x0 = 0. Each experiment has been repeated 100 times.
Example 3 (Bayesian Image Deconvolution): The uniform circular blur matrix has depth 9 and the additive noise has standard deviation σ = 0.5; the posterior mean is obtained by IPLA over 10^5 iterations with desired precision δ = 10^{-1} for the inexact proximal step.
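The moment-estimation protocol of Example 1 (run a chain, discard a burn-in, average |Y|^2 over the remaining samples) can be sketched as below. The potential, dimension, and chain length are scaled-down illustrative stand-ins, and a tamed-gradient chain is used here since the paper's exact IPLA update is not reproduced in this report:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10  # the paper uses d = 10^3; smaller here to keep the sketch fast

# Illustrative light-tailed potential V(x) = |x|^4 / 4 (assumption, not
# necessarily the paper's Example 1 target).
grad_V = lambda x: np.linalg.norm(x) ** 2 * x

gamma, n_samples, burn_in = 1e-3, 20_000, 2_000
x = np.full(d, 7.0)  # "start in tail" scenario
second_moments = []
for k in range(n_samples):
    g = grad_V(x)
    drift = gamma * g / (1.0 + gamma * np.linalg.norm(g))  # tamed drift
    x = x - drift + np.sqrt(2 * gamma) * rng.standard_normal(d)
    if k >= burn_in:  # discard burn-in, then accumulate the statistic
        second_moments.append(np.dot(x, x))

est_E_Y2 = np.mean(second_moments)  # Monte Carlo estimate of E|Y|^2
```

Repeating this loop 100 times with fresh seeds, as the paper does for each experiment, yields the spread of the estimator across runs.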