Gaussian Ensemble Belief Propagation for Efficient Inference in High-Dimensional, Black-box Systems

Authors: Dan MacKinlay, Russell Tsuchida, Daniel Pagendam, Petra Kuhnert

ICLR 2025

Reproducibility assessment (variable: result, followed by the supporting LLM response):

Research Type: Experimental
  "In this section, we compare GEnBP against an alternative belief propagation method, GaBP, and, for reference, a global Laplace approximation (MacKay, 1992). We use synthetic benchmarks designed to assess performance in high-dimensional, nonlinear dynamical systems. In both cases, the graph structure is a randomized system identification task (Appendix A.1), where a static parameter influences a noisily observed, nonlinear dynamical system."
Researcher Affiliation: Academia
  "Dan MacKinlay, CSIRO's Data61 (Dan.Mac EMAIL); Russell Tsuchida, Monash University; Dan Pagendam, CSIRO's Data61; Petra Kuhnert, CSIRO's Data61"
Pseudocode: Yes
  "Algorithm 1: Loopy Low-rank Belief Propagation over Factor Graph G; Algorithm 2: GEnBP; Algorithm 3: GEnBP fj → xℓ Message (Single Incoming)"
Open Source Code: Yes
  "Supporting code is available at github.com/danmackinlay/GEnBP."
Open Datasets: No
  "We use synthetic benchmarks designed to assess performance in high-dimensional, nonlinear dynamical systems. In both cases, the graph structure is a randomized system identification task (Appendix A.1), where a static parameter influences a noisily observed, nonlinear dynamical system."
Dataset Splits: No
  "The experiments described in the paper primarily use synthetic data generated by models like the 1D Transport Model and Navier-Stokes System. While they mention running multiple simulations (e.g., 'n = 10 runs', 'n = 40 simulations', 'n = 80 runs') for statistical robustness, and a temporal split for domain adaptation in Appendix B.3 ('GEnBP is applied to the first 5 time steps for domain adaptation'), there are no explicit training, validation, or test dataset splits in the conventional machine learning sense for pre-existing datasets."
Hardware Specification: Yes
  "Experiments were conducted on a Dell PowerEdge C6525 server with AMD EPYC 7543 32-core processors running at 2.8 GHz (3.7 GHz turbo) with 256 MB cache. Float precision is set to 64 bits, and memory usage is capped at 32 GB."
Software Dependencies: No
  "The paper mentions using 'a PyTorch implementation from Li et al. (2020)' for the Navier-Stokes equation. However, it does not specify the version number of PyTorch or any other software libraries, compilers, or operating systems used in the experiments."
Experiment Setup: Yes
  "Unless otherwise specified, the hyperparameters for both GaBP and GEnBP are set to γ² = 0.01 and σ² = 0.001. GEnBP has additional hyperparameters η² = 0.1 and ensemble size N = 64. We cap the number of message-propagation descent iterations at 150 and relinearise or re-simulate after every 10 steps."
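As a concreteness check on the reported setup, the stated hyperparameters can be gathered into a single configuration record. This is a minimal, hypothetical Python sketch: the names (`ExperimentConfig`, `relin_every`, and so on) are illustrative assumptions and are not identifiers from the GEnBP codebase; only the numeric values come from the excerpt above.

```python
# Hypothetical configuration record for the reported experiment setup.
# Field names are illustrative; values are taken from the paper's excerpt.
from dataclasses import dataclass


@dataclass(frozen=True)
class ExperimentConfig:
    gamma2: float = 0.01     # shared hyperparameter gamma^2 (GaBP and GEnBP)
    sigma2: float = 0.001    # shared hyperparameter sigma^2 (GaBP and GEnBP)
    eta2: float = 0.1        # GEnBP-only hyperparameter eta^2
    ensemble_size: int = 64  # GEnBP-only ensemble size N
    max_iters: int = 150     # cap on message-propagation descent iterations
    relin_every: int = 10    # relinearise / re-simulate every 10 steps


cfg = ExperimentConfig()

# Iterations at which relinearisation or re-simulation would occur
# under this schedule: 10, 20, ..., 150 (15 events in total).
relin_steps = [t for t in range(1, cfg.max_iters + 1) if t % cfg.relin_every == 0]
```

Under this reading of the schedule, the model is relinearised 15 times over a full 150-iteration run.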