A Proximal Algorithm for Sampling

Authors: Jiaming Liang, Yongxin Chen

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present a numerical example to illustrate our result. We consider sampling from a Gaussian-Laplace mixture... We run 500,000 iterations (with 100,000 burn-in iterations) for both the proximal sampling algorithm and LMC... Histograms and trace plots (of the 3rd coordinate) of the samples generated by both methods are presented in Figures 1 and 2.
Researcher Affiliation | Academia | Jiaming Liang (EMAIL), Department of Computer Science, Yale University, New Haven, CT 06511; Yongxin Chen (EMAIL), School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332.
Pseudocode | Yes | Algorithm 1: Alternating Sampling Framework (Lee et al., 2021); Algorithm 2: RGO Rejection Sampling; Algorithm 3: Accelerated Gradient Method.
Open Source Code | No | The paper does not contain any explicit statement about releasing code or a link to a code repository. The link provided in the paper (https://openreview.net/forum?id=CkXOwlhf27) is for the OpenReview discussion page, not for code.
Open Datasets | No | We consider sampling from a Gaussian-Laplace mixture ν(x) = 0.5 (2π)^(−d/2) √(det Q) exp(−(x − 1)ᵀ Q (x − 1)/2) + 0.5 · 2^d exp(−4‖x − 1‖₁), where Q = U S Uᵀ, d = 5, S = diag(14, 15, 16, 17, 18), and U is an arbitrary orthogonal matrix. The dataset used in the computational results is a synthetic mixture defined within the paper, not an external publicly available dataset with concrete access information.
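As a sanity check on this synthetic target, the mixture can be coded directly. A minimal sketch, assuming the density reconstructs as above (both components centered at the all-ones vector, the Laplace factor normalized as 2^d exp(−4‖x − 1‖₁)); the seed, the choice of U via a QR factorization, and the function names are illustrative:

```python
import numpy as np

def make_Q(seed=0, d=5):
    """Build Q = U S U^T with S = diag(14, ..., 18) and a random orthogonal U."""
    rng = np.random.default_rng(seed)
    S = np.diag(np.arange(14.0, 14.0 + d))
    # The Q factor of a Gaussian random matrix is orthogonal.
    U, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return U @ S @ U.T

def nu(x, Q):
    """Gaussian-Laplace mixture density; both components are centered at 1."""
    d = x.size
    r = x - 1.0
    gauss = (2 * np.pi) ** (-d / 2) * np.sqrt(np.linalg.det(Q)) * np.exp(-0.5 * r @ Q @ r)
    laplace = 2.0 ** d * np.exp(-4.0 * np.abs(r).sum())
    return 0.5 * gauss + 0.5 * laplace
```

Under this reading, each component integrates to one (a N(1, Q⁻¹) Gaussian and a product of Laplace(1, 1/4) factors), so ν is itself a normalized density symmetric about the all-ones vector.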
Dataset Splits | No | The paper describes sampling from a Gaussian-Laplace mixture in the computational results section. It does not involve traditional datasets with training, validation, or test splits, as it is a sampling problem from a defined distribution, not a supervised or unsupervised learning task requiring data partitioning for model evaluation.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU, CPU models, memory) used to run the experiments or simulations.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers).
Experiment Setup | Yes | We run 500,000 iterations (with 100,000 burn-in iterations) for both the proximal sampling algorithm and LMC with η = 1/(Md), where d = 5 and M is as in (5) with (α, L_α) = (1, 27) and δ = 1. Histograms and trace plots (of the 3rd coordinate) of the samples generated by both methods are presented in Figures 1 and 2. In addition, we also run 2,500,000 iterations (with 500,000 burn-in iterations) for LMC with η = 1/(5Md).
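The LMC baseline described here is the Euler-Maruyama discretization x_{k+1} = x_k − η ∇f(x_k) + √(2η) ξ_k with f = −log ν. A self-contained sketch under the reconstructed mixture density, using the odd symmetry ∇f(1 + v) = −∇f(1 − v) around the common center; the step size, iteration counts, and initialization below are illustrative stand-ins, not the paper's η = 1/(Md) with M from (5):

```python
import numpy as np

def make_Q(seed=0, d=5):
    """Q = U S U^T, S = diag(14, ..., 18), U a random orthogonal matrix."""
    rng = np.random.default_rng(seed)
    S = np.diag(np.arange(14.0, 14.0 + d))
    U, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return U @ S @ U.T

def grad_f(x, Q):
    """Gradient of f = -log(nu) for the Gaussian-Laplace mixture centered at 1."""
    d = x.size
    r = x - 1.0
    gauss = (2 * np.pi) ** (-d / 2) * np.sqrt(np.linalg.det(Q)) * np.exp(-0.5 * r @ Q @ r)
    laplace = 2.0 ** d * np.exp(-4.0 * np.abs(r).sum())
    # grad of nu: each component's density times its own log-density gradient.
    grad_nu = -0.5 * gauss * (Q @ r) - 0.5 * laplace * 4.0 * np.sign(r)
    return -grad_nu / (0.5 * gauss + 0.5 * laplace)

def lmc(n_iters, burn_in, eta, Q, seed=1):
    """Langevin Monte Carlo: x <- x - eta * grad_f(x) + sqrt(2 * eta) * N(0, I)."""
    rng = np.random.default_rng(seed)
    d = Q.shape[0]
    x = np.zeros(d)
    samples = np.empty((n_iters - burn_in, d))
    for k in range(n_iters):
        x = x - eta * grad_f(x, Q) + np.sqrt(2.0 * eta) * rng.standard_normal(d)
        if k >= burn_in:
            samples[k - burn_in] = x
    return samples
```

After burn-in, the empirical mean of each coordinate should settle near 1, the shared center of both mixture components; the paper's runs use 500,000 iterations, while a quick check can use far fewer.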