Piecewise deterministic sampling with splitting schemes

Authors: Andrea Bertazzi, Paul Dobson, Pierre Monmarché

JMLR 2025

Reproducibility assessment (variable: result, with supporting excerpt or explanation):
Research Type: Experimental. "Finally, we illustrate promising results for our samplers with numerical experiments on a Bayesian imaging inverse problem and a system of interacting particles."
Researcher Affiliation: Academia. Andrea Bertazzi (EMAIL), Centre de mathématiques appliquées, École Polytechnique; Paul Dobson (p.dobson EMAIL), Department of Mathematics and Computer Science, Heriot-Watt University and Maxwell Institute for Mathematical Sciences; Pierre Monmarché (EMAIL), Laboratoire Jacques-Louis Lions and Laboratoire de Chimie Théorique, Sorbonne Université.
Pseudocode: Yes. Algorithm 1: splitting scheme DBD for the ZZS; Algorithm 2: splitting scheme RDBDR for the BPS; Algorithm 3: non-reversible Metropolis-adjusted ZZS; Algorithm 4: non-reversible Metropolis-adjusted BPS.
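To make the "DBD" naming concrete, a single Deterministic-Bounce-Deterministic step for a Zig-Zag sampler (ZZS) can be sketched as below. This is a hedged illustration, not a transcription of the paper's Algorithm 1: it alternates a half step of straight-line drift, a component-wise velocity-flip step with the switching rates λ_i(x, v) = max(0, v_i ∂_i U(x)) frozen at the midpoint, and a second half drift. The standard-Gaussian target in the usage example is an assumption made for the sketch.

```python
import numpy as np

def dbd_zigzag_step(x, v, delta, grad_U, rng):
    """One DBD (Deterministic-Bounce-Deterministic) step for a Zig-Zag sampler.

    Sketch only: half drift, component-wise velocity flips with rates frozen
    at the midpoint, half drift. Rates are lambda_i = max(0, v_i * dU/dx_i).
    """
    x = x + 0.5 * delta * v                         # D: half deterministic drift
    rates = np.maximum(0.0, v * grad_U(x))          # B: frozen switching rates
    flips = rng.random(x.size) < 1.0 - np.exp(-delta * rates)
    v = np.where(flips, -v, v)                      # flip selected velocity components
    x = x + 0.5 * delta * v                         # D: half drift with new velocities
    return x, v

# Usage (assumed toy target): standard Gaussian, U(x) = |x|^2 / 2, grad_U(x) = x.
rng = np.random.default_rng(0)
x, v = np.zeros(2), np.ones(2)
samples = []
for _ in range(5000):
    x, v = dbd_zigzag_step(x, v, delta=0.1, grad_U=lambda z: z, rng=rng)
    samples.append(x.copy())
print(np.mean(samples, axis=0))  # should be near [0, 0]
```

The appeal of the splitting viewpoint is visible even in this sketch: each sub-step (free transport, velocity flips with frozen rates) is exactly solvable, so no event-rate upper bounds are needed, unlike exact PDMP simulation.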
Open Source Code: Yes. "The codes for all these experiments can be found at https://github.com/andreabertazzi/splittingschemes_PDMP."
Open Datasets: No. Supporting excerpts: "In this section we test the unadjusted ZZS (Algorithm 1) on an imaging inverse problem, which we solve with a Bayesian approach..." and "In Section 6.1 we give a concrete example of the latter situation, showing how in a high dimensional setting the Poisson thinning approach makes the exact simulation of a PDMP prohibitive even when the negative log-target distribution is gradient Lipschitz (see Equation (31) for more details on the bounds, which in the considered case have efficiency that decreases polynomially with the dimension of the process)."
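The Poisson thinning bottleneck mentioned in the excerpt can be made concrete with a small sketch (an illustration under assumed names, not code from the paper): to draw the first event of a process with rate λ(t) bounded by a constant M, one proposes arrival times from the homogeneous rate M and accepts each with probability λ(t)/M, so a loose bound M means many rejected, but still paid-for, rate (i.e., gradient) evaluations.

```python
import numpy as np

def first_event_by_thinning(rate, bound, rng):
    """First-event time of an inhomogeneous Poisson process with rate(t) <= bound.

    Proposals come from a homogeneous Poisson process with rate `bound`; each is
    accepted with probability rate(t) / bound. The expected number of proposals
    per true event is roughly bound / (typical rate), which is why loose,
    dimension-dependent bounds make exact PDMP simulation expensive.
    """
    t, proposals = 0.0, 0
    while True:
        t += rng.exponential(1.0 / bound)       # next candidate event time
        proposals += 1
        if rng.random() < rate(t) / bound:      # thinning acceptance test
            return t, proposals

# Usage: true rate is 10x below the bound, so ~10 proposals per accepted event.
rng = np.random.default_rng(1)
times, props = zip(*(first_event_by_thinning(lambda t: 0.1, bound=1.0, rng=rng)
                     for _ in range(2000)))
print(np.mean(props))   # close to bound / rate = 10
```

In the paper's high-dimensional example the analogue of `bound` comes from a global gradient-Lipschitz estimate, and its looseness grows polynomially with dimension, which is precisely the efficiency loss the excerpt refers to.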
Dataset Splits: No. The paper studies a Bayesian imaging inverse problem and a system of interacting particles; neither is evaluated with training/validation/test splits in the supervised-learning sense. The image deconvolution problem defines a posterior distribution p(x|y) for x given y, and the goal is to draw samples from it; the chain of interacting particles is a simulated system with no conventional dataset splits.
Hardware Specification: No. The paper does not mention any specific hardware (e.g., GPU models, CPU types, memory amounts) used for its experiments; it only discusses the computational cost of operations such as gradient evaluation and refers to "high dimensional settings".
Software Dependencies: No. The paper mentions using the SAPG algorithm (Vidal et al., 2020; De Bortoli et al., 2020) and the Chambolle-Pock algorithm (Chambolle and Pock, 2011) for specific tasks, but it gives no version numbers for these or for any other software components (programming languages, libraries, frameworks).
Experiment Setup: Yes. Supporting excerpts: "The values of the parameters in the considered example are summarised in Table 1. We are interested in drawing samples from the distribution (30), and in particular we compare the unadjusted ZZS (Algorithm 1, abbreviated as UZZS in the plots), ULA, the continuous ZZS, as well as the discretization of the underdamped Langevin algorithm considered by Sanz-Serna and Zygalakis (2021), which is a strongly second order accurate integrator abbreviated as UBU in the plots. To implement the continuous ZZS we can compute the Lipschitz constant of the gradient of the negative log-posterior, L, and thus we can implement the exact ZZS using the Poisson thinning technique... For ULA we use a step size of L^{-1}; for UZZS... we use 2L^{-1/2} as the step size. For UBU the performance based on theoretical guarantees is not competitive since this algorithm scales poorly with the condition number (Cheng et al., 2018; Durmus and Moulines, 2016), which is very large in these examples. However we have implemented UBU with a friction parameter γ = 2, which is outside the range covered by the theoretical guarantees, and with step size L^{-1/2}." And: "The parameters of all samplers are obtained by performing a grid search over the parameter space. It is clear from Figure 5 that ZZS gives cheaper and more accurate estimates of the mean and variance of the empirical variance in both cases considered. In particular, we see that ZZS clearly outperforms HMC and BPS. This holds even though the calibration of ZZS is considerably simpler, since it involves only the tuning of the step size, while for HMC and BPS one has to tune additionally K and M, and λ_r, respectively."
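The step-size rule quoted for ULA (δ = L^{-1}, with L the gradient-Lipschitz constant) can be illustrated with a minimal unadjusted Langevin sketch; the quadratic toy target and the value L = 1 below are assumptions for illustration, not the paper's imaging posterior.

```python
import numpy as np

def ula_chain(grad_U, L, x0, n_steps, rng):
    """Unadjusted Langevin algorithm with the step size delta = 1/L quoted above:
    x_{k+1} = x_k - delta * grad_U(x_k) + sqrt(2 * delta) * xi_k.
    """
    delta = 1.0 / L
    x, out = x0.copy(), []
    for _ in range(n_steps):
        x = x - delta * grad_U(x) + np.sqrt(2.0 * delta) * rng.standard_normal(x.size)
        out.append(x.copy())
    return np.asarray(out)

# Assumed toy target: standard Gaussian, U(x) = |x|^2 / 2, so grad_U(x) = x, L = 1.
rng = np.random.default_rng(2)
chain = ula_chain(grad_U=lambda z: z, L=1.0, x0=np.zeros(2), n_steps=5000, rng=rng)
print(chain.mean(axis=0))  # near [0, 0]
```

Note that with a step this large the chain is unadjusted and visibly biased (for this toy target its stationary variance is 2 rather than 1), which is exactly the accuracy-versus-cost trade-off that the paper's comparison of unadjusted samplers examines.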