Tuning Sequential Monte Carlo Samplers via Greedy Incremental Divergence Minimization

Authors: Kyurae Kim, Zuheng Xu, Jacob R. Gardner, Trevor Campbell

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5. Experiments. A representative subset of the results is shown in Figs. 2 and 3, while the full set of results is shown in App. F.1. Our Adaptive SMC sampler achieves more accurate estimates than the best-tuned end-to-end tuning results on Sonar and Brownian, while the estimate on Pines is comparable.
Researcher Affiliation | Academia | 1Dept. Computer and Information Science, University of Pennsylvania, Philadelphia, U.S. 2Dept. Statistics, University of British Columbia, Vancouver, Canada. Correspondence to: Kyurae Kim <EMAIL>, Zuheng Xu <EMAIL>, Trevor Campbell <EMAIL>, Jacob R. Gardner <EMAIL>.
Pseudocode | Yes | Algorithm 1: Adaptive Sequential Monte Carlo. Algorithm 2: Adapt Stepsize (L, t, hguess, δ, c, r, ϵ). Algorithm 3: Adapt KLMC (L, hguess, ρguess, δ, Ξ, c, r, ϵ).
Open Source Code | Yes | We implemented our SMC sampler¹ using the Julia language (Bezanson et al., 2017). ¹Link to GitHub repository: https://github.com/Red-Portal/ControlledSMC.jl/tree/v0.0.4.
Open Datasets | Yes | For the benchmarks, we ported some problems from the Inference Gym (Sountsov et al., 2020) to Julia, while the rest of the problems are taken from Posterior DB (Magnusson et al., 2025). Details on the problems considered in this work are in App. A, while the configuration of our adaptive method is specified in App. B.
Dataset Splits | No | The paper refers to "particles" and "computational budgets", which concern the SMC method itself rather than standard machine-learning dataset splits (e.g., train/validation/test). No information is provided on how the benchmark datasets were partitioned in the usual supervised-learning sense.
Hardware Specification | No | We also gratefully acknowledge the use of the ARC Sockeye computing platform at the University of British Columbia. The paper acknowledges a computing platform but does not specify any particular hardware components, such as GPU models, CPU types, or memory, used to run the experiments.
Software Dependencies | No | We implemented our SMC sampler¹ using the Julia language (Bezanson et al., 2017). Both methods are implemented in JAX (Bradbury et al., 2018), modified from the code provided by Geffner & Domke (2023). The paper mentions Julia and JAX but provides no version numbers for these or any other libraries, which are required for a reproducible description of ancillary software.
Experiment Setup | Yes | The computational budgets are set as N = 1024, B = 128, and T = 64. In all cases, the reference distribution is a standard Gaussian q = N(0_d, I_d), while we use a quadratic annealing schedule λ_t = (t/T)². Configuration of the Adaptation Procedure. Here, we collect the specifications of the tunable parameters in our adaptive SMC samplers. The parameters of SMC-LMC are set as in Table 4: ... The parameters of SMC-KLMC are set as in Table 5. For optimization, we used the Adam optimizer (Kingma & Ba, 2015) with three different learning rates {10⁻⁴, 10⁻³, 10⁻²} for 5,000 iterations, with a batch size of 32.
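To make the quoted setup concrete, the following is a minimal annealed-SMC sketch in NumPy using the reported configuration (N = 1024 particles, T = 64 steps, a standard Gaussian reference, and the quadratic schedule λ_t = (t/T)²). This is NOT the paper's adaptive Algorithm 1: the step-size adaptation of Algorithms 2–3 is omitted, the MCMC kernel is a plain fixed-step random-walk Metropolis move rather than LMC/KLMC, and the 1-D target N(3, 0.5²) is an invented toy example.

```python
# Generic annealed SMC with the paper's reported budget and schedule.
# NOT the paper's adaptive algorithm; the target and MCMC kernel are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def log_ref(x):                 # reference q = N(0, 1), unnormalized
    return -0.5 * x ** 2

def log_target(x):              # toy target N(3, 0.5^2), unnormalized
    return -0.5 * ((x - 3.0) / 0.5) ** 2

def log_anneal(x, lam):         # geometric path from reference to target
    return (1.0 - lam) * log_ref(x) + lam * log_target(x)

N, T, step = 1024, 64, 0.3      # particles, steps, fixed RWM step size
x = rng.standard_normal(N)      # particles drawn from the reference
log_Z = 0.0                     # accumulates the log-normalizer estimate
for t in range(1, T + 1):
    lam_prev, lam = ((t - 1) / T) ** 2, (t / T) ** 2   # quadratic schedule
    # incremental importance weights for moving lam_prev -> lam
    logw = log_anneal(x, lam) - log_anneal(x, lam_prev)
    m = logw.max()
    log_Z += m + np.log(np.mean(np.exp(logw - m)))
    w = np.exp(logw - m)
    x = rng.choice(x, size=N, p=w / w.sum())           # multinomial resampling
    for _ in range(4):          # a few Metropolis moves to diversify particles
        prop = x + step * rng.standard_normal(N)
        accept = np.log(rng.uniform(size=N)) < log_anneal(prop, lam) - log_anneal(x, lam)
        x = np.where(accept, prop, x)

# log_Z estimates log(Z_target / Z_ref) = log(0.5) for this toy pair;
# the particle mean should land near the target mean of 3.0.
print(np.mean(x), log_Z)
```

The fixed step size 0.3 here is exactly the kind of tuning parameter the paper's adaptation procedure is designed to choose automatically; with a poorly chosen step, the Metropolis moves fail to diversify particles after resampling and the normalizer estimate degrades.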