Laplace Meets Moreau: Smooth Approximation to Infimal Convolutions Using Laplace's Method

Authors: Ryan J. Tibshirani, Samy Wu Fung, Howard Heaton, Stanley Osher

JMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To briefly highlight our contributions, we show that some recently proposed techniques for approximating Moreau envelopes and proximal operators, which have been motivated through a connection to PDEs, can be instead derived directly via self-normalized Laplace approximation. This allows us to extend the approximation technique to a broader class of problems, of infimal convolution form. We derive theory on the asymptotic validity of this approximation, which requires weaker conditions than the traditional analysis of Laplace approximation. We also present several example applications and numerical experiments. [...] In what follows, we walk through applications of the Laplace approximations proposed and studied above to problems in PDEs and optimization. [...] We focus on low-dimensional problems where sampling is fairly easy (naive Monte Carlo or importance sampling works fairly well). Higher-dimensional problems would call for more advanced sampling techniques.
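The sampling scheme quoted above (a self-normalized Laplace approximation evaluated by naive Monte Carlo with a uniform proposal) can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the function name `soft_prox`, the proposal box, and all parameter values below are assumptions.

```python
import numpy as np

def soft_prox(f, x, t=1.0, delta=1e-2, n_samples=100_000, box=5.0, seed=0):
    """Sketch: estimate prox_{t f}(x) = argmin_y f(y) + ||x - y||^2 / (2 t)
    via a self-normalized softmin (Laplace) approximation, using naive
    Monte Carlo with a uniform proposal q on [-box, box]^d. With a uniform
    proposal, the density q cancels in the self-normalized weight ratio."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    y = rng.uniform(-box, box, size=(n_samples, d))       # draws from q
    g = f(y) + np.sum((y - x) ** 2, axis=1) / (2.0 * t)   # Moreau objective
    a = -g / delta
    a -= a.max()                                          # stabilize the exponentials
    w = np.exp(a)
    w /= w.sum()                                          # self-normalized weights
    return w @ y                                          # weighted average -> soft prox

# Sanity check on the sphere function f(y) = ||y||^2 / 2,
# whose exact prox is prox_{t f}(x) = x / (1 + t).
x = np.array([2.0, -1.0])
p = soft_prox(lambda Y: 0.5 * np.sum(Y ** 2, axis=1), x, t=1.0)
```

As delta shrinks the weights concentrate on samples near the true minimizer, recovering the prox; in higher dimensions the uniform proposal wastes samples, which is why the authors note that more advanced sampling would be needed.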
Researcher Affiliation | Collaboration | Ryan J. Tibshirani (EMAIL), Department of Statistics, University of California, Berkeley, Berkeley, CA 94720, USA; Samy Wu Fung (EMAIL), Department of Applied Mathematics and Statistics, Colorado School of Mines, Golden, CO 80401, USA; Howard Heaton (EMAIL), Typal Academy, Richland, WA 99352, USA; Stanley Osher (EMAIL), Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, USA.
Pseudocode | No | The paper describes algorithms and methods using mathematical equations and prose but does not include any clearly labeled pseudocode blocks or structured algorithm steps in a code-like format.
Open Source Code | Yes | "An open-source repository with code to reproduce our experiments is available at https://github.com/mines-opt-ml/laplace-inf-conv."
Open Datasets | No | The paper uses synthetic benchmark functions (e.g., 'sphere', 'ellipsoidal', 'discus', 'rosenbrock', 'sharp ridge', 'weierstrass') and generates data for the Poisson linear inverse problem ('generate A ∈ R^{5×5}_{++} by sampling its entries independently from a uniform distribution on [1, 2]. We generate x ∈ R^5 by sampling its entries independently from a uniform on [5, 6], and randomly set half of these to 0. Then, we generate b ∈ R^5_+ by sampling its entries independently from Poisson distributions with means (Ax)_i, i = 1, ..., 5.'). No external, publicly available datasets with access information are provided.
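The quoted data-generation recipe is easy to reproduce. A minimal sketch, with two assumptions not stated in the excerpt: the random seed, and reading "half" of d = 5 entries as ⌊d/2⌋ = 2.

```python
import numpy as np

rng = np.random.default_rng(0)            # seed is an assumption; none is given
n = d = 5
A = rng.uniform(1.0, 2.0, size=(n, d))    # entries i.i.d. Unif[1, 2] -> A has all-positive entries
x_true = rng.uniform(5.0, 6.0, size=d)    # entries i.i.d. Unif[5, 6]
zero_idx = rng.choice(d, size=d // 2, replace=False)
x_true[zero_idx] = 0.0                    # zero out (roughly) half the entries
b = rng.poisson(A @ x_true)               # b_i ~ Poisson((A x_true)_i)
```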
Dataset Splits | No | The paper primarily uses synthetic functions and generates data for its experiments, rather than using pre-existing datasets with established train/validation/test splits, so the concept of dataset splits is not applicable and is not mentioned.
Hardware Specification | No | The paper does not describe the hardware used for its experiments, such as GPU or CPU models, memory, or cloud instance types.
Software Dependencies | No | The paper mentions using 'PyTorch (Paszke et al., 2019)' and 'SciPy' but does not provide version numbers for these or any other software dependencies, which would be needed to reproduce the environment.
Experiment Setup | Yes | Section 5.1: 'at 1000 uniformly sampled values of x ∈ [−10, 10]^d and t ∈ [10^{−1}, 1].' 'We choose the proposal density q to be uniform over [−10, 10]^d.' 'over 50 repetitions.' Section 5.2: 'various choices of δ ∈ {10^{−4}, 10^{−3}, 10^{−2}, 10^{−1}, 1}, N ∈ {10, 10^2, 10^3, 10^4}'. 'fix λ = 1.' 'initialize all algorithms at x_0 = (4, ..., 4) ∈ R^{10}, and average all results over 3 repetitions'. Section 5.3: 'set n = d = 5'. 'µ = 10^{−3} for the regularization parameter, η = 10^{−5} for the step size, δ = 2·10^{−3} for the level of noise, and N = 5·10^4 for the number of samples.'
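The quoted settings from Sections 5.1 and 5.2 can be collected into a configuration sketch. Two assumptions: the dimension d (which varies by benchmark in the paper) and that x and t are drawn as independent uniforms rather than on a fixed grid, which the excerpt does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2                                            # illustrative; varies by benchmark
n_eval, n_reps = 1000, 50                        # Section 5.1: 1000 sampled points, 50 repetitions
xs = rng.uniform(-10.0, 10.0, size=(n_eval, d))  # x sampled uniformly from [-10, 10]^d
ts = rng.uniform(0.1, 1.0, size=n_eval)          # t sampled uniformly from [10^-1, 1]
deltas = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]           # Section 5.2: delta grid
Ns = [10, 10**2, 10**3, 10**4]                   # Section 5.2: sample-size grid
lam = 1.0                                        # Section 5.2: lambda fixed at 1
x0 = np.full(10, 4.0)                            # initialization x0 = (4, ..., 4) in R^10
```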