Learning Likelihood-Free Reference Priors
Authors: Nicholas George Bishop, Daniel Jarne Ornia, Joel Dyer, Ani Calinescu, Michael J. Wooldridge
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate that good approximations to reference priors for simulation models are in this way attainable, providing a first step towards the development of likelihood-free objective Bayesian inference procedures. ... Here, we present a series of experiments to assess the RP-learning methods described in Section 4. |
| Researcher Affiliation | Academia | 1University of Oxford. Correspondence to: Nicholas Bishop <EMAIL>, Daniel Jarne Ornia <EMAIL>, Joel Dyer <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: Flow Pretraining Procedure (pretrain); Algorithm 2: Training Procedure with Variational Lower Bounds; Algorithm 3: Flow Pretraining Procedure (pretrain-conditional); Algorithm 4: Training for GED. |
| Open Source Code | Yes | Code available at https://github.com/joelnmdyer/lf_reference_priors. |
| Open Datasets | Yes | We next consider the popular SBI benchmark task SLCPD (Lueckmann et al., 2021), based on the experiment first introduced by Papamakarios et al. (2019). ... The g-and-k model appears frequently as a benchmark case study for SBI methods (see, e.g., Fearnhead & Prangle, 2012). |
| Dataset Splits | No | The paper describes generating data directly from simulators (e.g., 'n samples are generated iid from N(µ, σ2)' or 'iid data is generated for t = 1, . . . , n') rather than partitioning a static dataset. Conventional train/test/validation splits are therefore not specified. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper cites 'PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019' and 'Adam (Kingma, 2014)'. While PyTorch is a key software dependency, '2019' is a publication year rather than a version number, and no other software components are mentioned with specific version numbers. |
| Experiment Setup | Yes | Table 4. Hyperparameter settings for Info NCE and SMILE experiments. Table 6. Hyperparameter settings for GED. |