Implicit Langevin Algorithms for Sampling From Log-concave Densities

Authors: Liam Hodgkinson, Robert Salomone, Fred Roosta

JMLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical examples supporting our theoretical analysis are also presented."
Researcher Affiliation | Academia | Liam Hodgkinson (EMAIL): Department of Statistics, UC Berkeley, Berkeley, CA 94720, USA; International Computer Science Institute, Berkeley, CA 94704, USA. Robert Salomone (EMAIL): Centre for Data Science, Queensland University of Technology, Brisbane, QLD 4001, Australia. Fred Roosta (EMAIL): School of Mathematics and Physics, The University of Queensland, St Lucia, QLD 4067, Australia; International Computer Science Institute, Berkeley, CA 94704, USA.
Pseudocode | Yes | Algorithm 1: Implicit Langevin Algorithm (ILA); Algorithm 2: Inexact Implicit Langevin Algorithm (i-ILA).
Open Source Code | No | The paper describes Algorithm 1 and Algorithm 2 but does not provide any explicit statement about releasing the source code or a link to a repository.
Open Datasets | Yes | "We use the musk (version 1) data set from the UCI repository (Dua and Graff, 2019)."
Dataset Splits | No | The paper uses the musk (version 1) dataset to define the posterior density for Bayesian logistic regression. It does not describe any training/test/validation splits of this dataset, as the experiments involve sampling from the defined posterior, not training a model on splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers used for the implementation or experiments.
Experiment Setup | Yes | "MMTV and MMD discrepancies were computed between π and samples of N = 5000 points generated by Algorithm 1 with θ ∈ {0, 1/2, 1} and a variety of step sizes h (encompassing 4/M and the step size heuristics in Section 4). Common random numbers were used, and no burn-in period was applied. ... discrepancies were computed between samples of N = 10000 points generated by Algorithm 2 (with θ ∈ {0, 1/2, 1}, ϵ = 10⁻⁹, and a variety of step sizes h encompassing 4/M and the step size heuristics in Section 4, under the assumption that eigenvalues are distributed according to Equation 19) and a gold-standard run comprised of 50,000 samples obtained from hand-tuned SMMALA (Girolami and Calderhead, 2011). ... Common random numbers were used, and no burn-in period was applied."
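Since no source code is released, the sketch below illustrates the general θ-method implicit Langevin update that the paper's Algorithm 1 is built around: x_{k+1} = x_k + h[(1−θ)∇log π(x_k) + θ∇log π(x_{k+1})] + √(2h)ξ_k. The fixed-point solver, tolerance handling, and all function names here are assumptions for illustration, not a reconstruction of the paper's exact Algorithm 1 or its inexact variant.

```python
import numpy as np

def implicit_langevin_step(x, grad_log_pi, h, theta, rng, tol=1e-9, max_iter=100):
    """One theta-method implicit Langevin step (hedged sketch).

    Solves x_new = c + h * theta * grad_log_pi(x_new) by naive
    fixed-point iteration, where c collects the explicit part of the
    drift and the Gaussian noise. theta = 0 recovers the explicit
    (unadjusted) Langevin step; theta = 1 is fully implicit.
    """
    xi = rng.standard_normal(x.shape)
    c = x + h * (1.0 - theta) * grad_log_pi(x) + np.sqrt(2.0 * h) * xi
    x_new = x  # initial guess: current state
    for _ in range(max_iter):
        x_next = c + h * theta * grad_log_pi(x_new)
        if np.linalg.norm(x_next - x_new) <= tol:
            return x_next
        x_new = x_next
    return x_new

def sample_implicit_langevin(grad_log_pi, x0, n_samples, h, theta, seed=0):
    """Run the chain and return all n_samples states (no burn-in)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_samples, x.size))
    for k in range(n_samples):
        x = implicit_langevin_step(x, grad_log_pi, h, theta, rng)
        chain[k] = x
    return chain
```

For a strongly log-concave target such as a standard Gaussian (∇log π(x) = −x), the inner iteration is a contraction whenever hθ < 1, so the fixed point is well defined; the paper's i-ILA instead solves this inner problem inexactly to a tolerance ϵ.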
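The experiment setup reports MMD discrepancies between sample sets. As a reference point, a minimal biased (V-statistic) estimate of squared MMD with a Gaussian kernel can be written as below; the kernel choice and bandwidth are assumptions, since the paper's exact MMD configuration is not quoted here.

```python
import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    """Biased V-statistic estimate of squared MMD with a Gaussian kernel.

    X: (n, d) array of samples from the first distribution.
    Y: (m, d) array of samples from the second distribution.
    """
    def gram(A, B):
        # Pairwise squared Euclidean distances, then Gaussian kernel.
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2))

    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

The estimate is zero when both sample sets coincide and grows as the distributions separate, which is what makes it usable as a discrepancy against a gold-standard run.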