An Efficient Sampling Algorithm for Non-smooth Composite Potentials
Authors: Wenlong Mou, Nicolas Flammarion, Martin J. Wainwright, Peter L. Bartlett
JMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5. Simulation results. In this section, we present some simulation results that compare the MAPLA algorithm with several other methods that have been proposed for the composite sampling problem. We run experiments on various instances of a Bayesian Lasso problem... Figure 1 shows the empirical acceptance rates of the four proposal distributions over a range of stepsizes. In Figure 2, we show plots of the (estimated) mixing-time proxy versus the stepsize. |
| Researcher Affiliation | Academia | Wenlong Mou, Department of EECS, University of California, Berkeley, Berkeley, CA 94720, USA; Nicolas Flammarion, School of Computer and Communication Sciences, EPFL, CH-1015 Lausanne, Switzerland; Martin J. Wainwright, Department of EECS and Department of Statistics, University of California, Berkeley, Berkeley, CA 94720, USA; Peter L. Bartlett, Department of EECS and Department of Statistics, University of California, Berkeley, Berkeley, CA 94720, USA. |
| Pseudocode | Yes | Algorithm 1: Metropolis-Adjusted Proximal Langevin Algorithm (MAPLA). Require: access to f, ∇f, g, the proximal sampling oracle O_{η,g}(z), initial distribution p_0; parameter η. Ensure: approximate sample from p ∝ e^{−U}. Sample X_0 ∼ p_0. For k = 0, 1, 2, …: draw Y_k ∼ O_{η,g}(X_k − η∇f(X_k)) using the proximal sampling oracle; set X_{k+1} = Y_k w.p. min{1, e^{−U(Y_k)} p(Y_k, X_k) / (e^{−U(X_k)} p(X_k, Y_k))}, and X_{k+1} = X_k otherwise. |
| Open Source Code | No | The paper does not contain any statement about code availability or a link to a code repository. |
| Open Datasets | No | Our simulations are based on the following data-generation procedure. For any vector x, let δ_x denote the atomic mass at x, and let p ∈ (0, 1) be a sparsity parameter. We draw samples as θ*_j ∼ i.i.d. p·N(0, 1) + (1 − p)·δ_0 for each j ∈ [d], and X_i ∼ i.i.d. N(0, I_d), and Y_i = X_i^⊤ θ* + ξ_i for each i ∈ [n], where the noise variables ξ_i ∼ i.i.d. N(0, 1) are independent of {X_i}_{i=1}^n. |
| Dataset Splits | No | The paper describes a data-generating procedure for simulations but does not specify train/test/validation splits of a given dataset. |
| Hardware Specification | No | The paper discusses simulation results but does not specify any hardware details used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | Our simulations are based on the following data-generating procedure. For any vector x, let δ_x denote the atomic mass at x, and let p ∈ (0, 1) be a sparsity parameter. We draw samples as θ*_j ∼ i.i.d. p·N(0, 1) + (1 − p)·δ_0 for each j ∈ [d], and X_i ∼ i.i.d. N(0, I_d), and Y_i = X_i^⊤ θ* + ξ_i for each i ∈ [n], where the noise variables ξ_i ∼ i.i.d. N(0, 1) are independent of {X_i}_{i=1}^n. We do so for problem instances constructed via the data-generating process from equation (18) with sparsity parameter p = 0.3 and problem dimension d = 500. We consider different settings of both the sample size n and the regularization parameter λ. A larger choice of λ places more emphasis on the component g, so that difficulties due to non-smoothness should become more visible in the behavior. Accordingly, we consider three different choices λ ∈ {5, 20, 80}, so as to see the impact of small, medium, and large non-smooth components. In parallel, we consider two different settings of the sample size: n = 1000 > d = 500 in the over-determined setting, and n = 250 < d in the under-determined setting. ... The empirical acceptance rate is taken as an average over a Markov chain trajectory of length m = 10000. ... In our simulation, we choose L = 10 and m = 10000. ... Note that the MYMALA algorithm by Durmus et al. (2018) also involves an additional tuning parameter ς. Based on the empirical performance, we choose ς = 3η for these comparisons, which appeared to optimize the acceptance rate. |
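The MAPLA pseudocode quoted above can be sketched as a generic Metropolis-Hastings loop. This is a minimal sketch, not the paper's implementation: the proximal sampling oracle O_{η,g} and its proposal density are abstracted as user-supplied callables `prox_sample` and `prox_logdensity`, since their form depends on g and η.

```python
import numpy as np

def mapla_chain(grad_f, U, prox_sample, prox_logdensity, x0, eta, n_steps, rng=None):
    """Sketch of Algorithm 1 (MAPLA).

    prox_sample(z) draws a proposal from the (assumed) oracle O_{eta,g}(z);
    prox_logdensity(y, z) evaluates the log proposal density of y given centre z.
    U is the full potential f + g, so that the target is p ~ exp(-U).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = [x.copy()]
    for _ in range(n_steps):
        z = x - eta * grad_f(x)          # gradient step on the smooth part f
        y = prox_sample(z)               # proposal Y_k from the proximal oracle
        z_back = y - eta * grad_f(y)     # centre of the reverse move
        # log of min{1, e^{-U(Y)} p(Y, X) / (e^{-U(X)} p(X, Y))}
        log_alpha = (-U(y) + prox_logdensity(x, z_back)) - (-U(x) + prox_logdensity(y, z))
        if np.log(rng.uniform()) < log_alpha:
            x = y                        # accept; otherwise keep X_k
        samples.append(x.copy())
    return np.array(samples)
```

For illustration, taking g ≡ 0 and assuming the oracle then reduces to a Gaussian N(z, 2ηI) recovers the familiar MALA update; with nonzero g one would substitute an oracle adapted to the non-smooth component.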
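The spike-and-slab data-generating procedure described in the Open Datasets and Experiment Setup rows is simple enough to reproduce directly. A minimal sketch (the function name and signature are our own, not the paper's):

```python
import numpy as np

def generate_lasso_data(n, d, p, rng=None):
    """Draw (theta*, X, Y) per the paper's simulation design:
    theta*_j ~ p * N(0, 1) + (1 - p) * delta_0 for j in [d],
    X_i ~ N(0, I_d), and Y_i = <X_i, theta*> + xi_i with xi_i ~ N(0, 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.uniform(size=d) < p             # coordinate is nonzero w.p. p
    theta = np.where(mask, rng.standard_normal(d), 0.0)
    X = rng.standard_normal((n, d))            # i.i.d. standard Gaussian design
    xi = rng.standard_normal(n)                # noise, independent of X
    Y = X @ theta + xi
    return theta, X, Y
```

The paper's settings correspond to `generate_lasso_data(1000, 500, 0.3)` for the over-determined case (n = 1000 > d = 500) and `generate_lasso_data(250, 500, 0.3)` for the under-determined one.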