Reparameterization invariance in approximate Bayesian inference
Authors: Hrittik Roy, Marco Miani, Carl Henrik Ek, Philipp Hennig, Marvin Pförtner, Lukas Tatzel, Søren Hauberg
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, our diffusion consistently improves posterior fit, suggesting that reparameterizations should be given more attention in Bayesian deep learning. |
| Researcher Affiliation | Academia | Hrittik Roy, Marco Miani (Technical University of Denmark); Carl Henrik Ek (University of Cambridge, Karolinska Institutet); Philipp Hennig, Marvin Pförtner, Lukas Tatzel (University of Tübingen, Tübingen AI Center); Søren Hauberg (Technical University of Denmark) |
| Pseudocode | Yes | Algorithm 1 Laplace diffusion |
| Open Source Code | Yes | Code: https://github.com/h-roy/geometric-laplace. |
| Open Datasets | Yes | We train a 44,000-parameter LeNet (LeCun et al., 1989) on MNIST and FMNIST as well as a 270,000-parameter ResNet (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2009). |
| Dataset Splits | No | The paper mentions 'held-out test set' but does not explicitly specify validation dataset splits or how they were derived for reproduction. |
| Hardware Specification | Yes | We run the sampling algorithm on H100 GPUs to perform the high-order Lanczos decomposition. (See the Lanczos sketch below the table.) |
| Software Dependencies | No | The paper mentions 'Adam optimizer' and 'SGD' but does not specify software versions for libraries or frameworks like PyTorch, TensorFlow, or Python. |
| Experiment Setup | Yes | We train LeNet with the Adam optimizer and a learning rate of 10⁻³. For the ResNet we use SGD with a learning rate of 0.1 with momentum and weight decay. (See the configuration sketch below the table.) |
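
The Experiment Setup row reports the optimizer settings, but, as the Software Dependencies row notes, the paper does not name a framework or library versions. The following is a minimal sketch of the reported configuration, assuming PyTorch; the LeNet-style architecture and the momentum (0.9) and weight-decay (5e-4) values are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of the reported training configuration. PyTorch is assumed
# (the paper does not specify its framework), and the LeNet below is only a
# LeNet-style stand-in, not the exact 44,000-parameter model from the paper.
import torch
import torch.nn as nn

class LeNet(nn.Module):
    """LeNet-style CNN for 28x28 grayscale inputs (MNIST / FMNIST)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.net(x)

# Reported: LeNet trained with Adam at learning rate 1e-3.
lenet = LeNet()
adam = torch.optim.Adam(lenet.parameters(), lr=1e-3)

# Reported: ResNet on CIFAR-10 trained with SGD, learning rate 0.1, "with
# momentum and weight decay". The exact momentum / weight-decay values are not
# given in the excerpt; 0.9 and 5e-4 are common defaults used here as assumptions.
# resnet = ...  # ~270,000-parameter ResNet (He et al., 2016), not reproduced here
# sgd = torch.optim.SGD(resnet.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
```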
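
The Hardware Specification row mentions a high-order Lanczos decomposition used during sampling. The sketch below is a generic, textbook Lanczos tridiagonalization built only from matrix-vector products (no reorthogonalization); it illustrates the kind of routine being referred to and is not the paper's implementation, which is available in the linked repository.

```python
# Generic Lanczos tridiagonalization of a symmetric operator accessed only
# through matrix-vector products. A textbook sketch, not the paper's code.
import numpy as np

def lanczos(matvec, dim, num_iters, seed=0):
    """Return a Krylov basis Q (dim x k) and tridiagonal T (k x k) with
    Q.T @ A @ Q ~= T for the symmetric operator A behind `matvec`."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(dim)
    q /= np.linalg.norm(q)
    Q, alphas, betas = [q], [], []
    q_prev, beta = np.zeros(dim), 0.0
    for _ in range(num_iters):
        w = matvec(q) - beta * q_prev          # apply the operator, subtract previous direction
        alpha = float(q @ w)
        w -= alpha * q                          # orthogonalize against the current direction
        alphas.append(alpha)
        beta = float(np.linalg.norm(w))
        if beta < 1e-12:                        # Krylov space exhausted
            break
        betas.append(beta)
        q_prev, q = q, w / beta
        Q.append(q)
    k = len(alphas)
    Q = np.stack(Q[:k], axis=1)
    T = np.diag(alphas) + np.diag(betas[:k - 1], 1) + np.diag(betas[:k - 1], -1)
    return Q, T

# Example: approximate the top eigenvalues of a random symmetric PSD matrix.
A = np.random.default_rng(1).standard_normal((500, 500))
A = A @ A.T
Q, T = lanczos(lambda v: A @ v, dim=500, num_iters=50)
print(np.linalg.eigvalsh(T)[-3:])  # close to np.linalg.eigvalsh(A)[-3:]
```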