Tree-AMP: Compositional Inference with Tree Approximate Message Passing

Authors: Antoine Baker, Florent Krzakala, Benjamin Aubin, Lenka Zdeborová

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We introduce Tree-AMP, standing for Tree Approximate Message Passing, a Python package for compositional inference in high-dimensional tree-structured models. ... In Section 4, we illustrate the package on a few examples. ... We compare the Tree-AMP performance on this inference task to the Bayes-optimal theoretical prediction from (Barbier et al., 2019) and to two state-of-the-art algorithms for this task: Hamiltonian Monte Carlo from the PyMC3 package (Salvatier et al., 2016) and Lasso (L1-regularized linear regression) from the Scikit-Learn package (Pedregosa et al., 2011).
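The Lasso baseline mentioned in the quote is a standard Scikit-Learn estimator, so the comparison protocol is easy to sketch. A minimal, illustrative version of a sparse linear regression task follows; the problem sizes, sparsity level, and regularization strength here are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, M, rho = 200, 100, 0.05  # illustrative sizes, not the paper's values

# Sparse ground-truth signal: each entry is nonzero with probability rho
x_true = rng.standard_normal(N) * (rng.random(N) < rho)

# Random Gaussian sensing matrix and noisy linear observations y = A x + noise
A = rng.standard_normal((M, N)) / np.sqrt(N)
y = A @ x_true + 0.01 * rng.standard_normal(M)

# L1-regularized linear regression; the paper tunes the regularization
# beforehand by simulation, here it is simply fixed by hand
lasso = Lasso(alpha=1e-3, max_iter=10_000)
lasso.fit(A, y)
x_hat = lasso.coef_

mse = np.mean((x_hat - x_true) ** 2)
```

The mean squared error of the Lasso estimate is the same metric the paper uses to compare against the Bayes-optimal prediction.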
Researcher Affiliation | Academia | Antoine Baker EMAIL, Florent Krzakala EMAIL, Laboratoire de Physique, CNRS, École Normale Supérieure, PSL University, Paris, France; Benjamin Aubin EMAIL, Lenka Zdeborová EMAIL, Institut de Physique Théorique, CNRS, CEA, Université Paris-Saclay, Saclay, France
Pseudocode | Yes | Algorithm 1: Generic Tree-AMP algorithm; Algorithm 2: Expectation propagation in Tree-AMP (Gaussian beliefs); Algorithm 3: Tree-AMP algorithm for the teacher prior second moments; Algorithm 4: Tree-AMP State evolution (replica symmetric mismatched setting); Algorithm 5: Tree-AMP State evolution (Bayes-optimal setting)
Open Source Code | Yes | The source code is publicly available at https://github.com/sphinxteam/tramp and the documentation at https://sphinxteam.github.io/tramp.docs.
Open Datasets | Yes | Let us consider a signal x ∈ R^N (with N = 784) drawn from the MNIST data set. We want to reconstruct the original image from a corrupted observation y = ϕ(x) ∈ R^N
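The corrupted-observation setup quoted above can be illustrated with a simple masking operator on a flattened 28×28 image. The following NumPy sketch uses a random stand-in for an MNIST image and a hypothetical band-erasing mask; the paper's exact corruption operators (band- and uniform-inpainting) may use a different mask convention:

```python
import numpy as np

N = 784  # a flattened 28x28 MNIST image, as in the quoted example

rng = np.random.default_rng(0)
x = rng.random(N)  # stand-in for an MNIST image; real data not loaded here

def inpainting(x, alpha=0.3):
    """Corruption y = phi(x): erase a fraction alpha of the pixels.

    This version zeroes out the first alpha*N entries (a contiguous
    'band'); the paper's exact mask convention is an assumption here.
    """
    y = x.copy()
    erased = int(alpha * len(x))
    y[:erased] = 0.0
    return y

y = inpainting(x, alpha=0.3)
```

Inference then amounts to recovering x from y given knowledge of ϕ and a prior on x (e.g. a trained generative model, in the paper's VAE experiments).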
Dataset Splits | Yes | The above experiments have been performed with parameters (N, ρ, ) = (1000, 0.05, 0.01) and have been averaged over 100 samples. ... (upper) sparse DFT denoising with (N, ρ, ) = (100, 0.02, 0.1) and (lower) sparse gradient denoising with (N, ρ, ) = (400, 0.04, 0.01). ... (right-upper) Band-inpainting ϕ^{inp,I_band}_α with α = 0.3 (right-lower) Uniform-inpainting ϕ^{inp,I_uni}_α with α = 0.5. ... MSE averaged over 25 instances of EP matches perfectly the MSE predicted by SE.
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments.
Software Dependencies | No | We compare the Tree-AMP performance on this inference task to the Bayes-optimal theoretical prediction from (Barbier et al., 2019) and to two state-of-the-art algorithms for this task: Hamiltonian Monte Carlo from the PyMC3 package (Salvatier et al., 2016) and Lasso (L1-regularized linear regression) from the Scikit-Learn package (Pedregosa et al., 2011). ... The Keras-VAE architecture is summarized in Figure 8 and the training procedure on the MNIST data set follows closely the canonical one detailed in (Keras-VAE).
Experiment Setup | Yes | The above experiments have been performed with parameters (N, ρ, ) = (1000, 0.05, 0.01) and have been averaged over 100 samples. ... ep.iterate(max_iter=200) ... (with ns = 1000 distribution samples and NUTS sampler) and Lasso (green) from Scikit-Learn (with the optimal regularization parameter obtained beforehand by simulation). ... (N, ρ, ) = (100, 0.02, 0.1) and (N, ρ, ) = (400, 0.04, 0.01). ... se.iterate(max_iter=200)
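The "averaged over 100 samples" protocol quoted in the setup is a plain Monte Carlo loop over independent problem instances. The sketch below uses a hypothetical `run_instance` function with a trivial zero estimator as a placeholder; in the actual experiments each instance would instead run the Tree-AMP EP solver (the quoted `ep.iterate(max_iter=200)`), which is not reproduced here:

```python
import numpy as np

def run_instance(seed, N=1000, rho=0.05, noise=0.01):
    """Hypothetical stand-in for one inference run.

    Draws a fresh sparse signal and returns the MSE of a trivial zero
    estimator; a real experiment would call the Tree-AMP EP solver here.
    """
    rng = np.random.default_rng(seed)
    x_true = rng.standard_normal(N) * (rng.random(N) < rho)
    x_hat = np.zeros(N)  # placeholder estimator
    return np.mean((x_hat - x_true) ** 2)

# Average the MSE over 100 independent instances, as in the quoted setup
mses = [run_instance(seed) for seed in range(100)]
avg_mse = np.mean(mses)
```

For the zero estimator the average MSE concentrates around ρ (here 0.05), since each entry is a standard Gaussian with probability ρ; the same averaging loop applies unchanged to any other estimator.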