Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Tensor Monte Carlo: Particle Methods for the GPU Era
Authors: Laurence Aitchison
NeurIPS 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that TMC is superior to IWAE on a generative model with multiple stochastic layers trained on the MNIST handwritten digit database, and we show that TMC can be combined with standard variance reduction techniques. |
| Researcher Affiliation | Academia | Laurence Aitchison, University of Bristol, Bristol, UK, EMAIL |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described, nor does it explicitly state that code is being released. |
| Open Datasets | Yes | Finally, we do experiments on VAEs with multiple stochastic layers trained on the MNIST handwritten digit database. |
| Dataset Splits | No | The paper states training was done on the MNIST handwritten digit database, but does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, and test sets. |
| Hardware Specification | Yes | The time required for computing marginal likelihood estimates in A on a single Titan X GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'Adam optimizer' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | In all experiments, we used the Adam optimizer (Kingma & Ba, 2014) using the PyTorch default hyperparameters, and weight normalization (Salimans & Kingma, 2016) to improve numerical stability. We used leaky-ReLU nonlinearities everywhere except for the standard deviations (Sønderby et al., 2016), for which we used 0.01+softplus(x), to improve numerical stability by ensuring that the standard deviations could not become too small. (A minimal configuration sketch follows this table.) |
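The Experiment Setup row describes a concrete PyTorch configuration. Below is a minimal sketch of how that setup might look in code, assuming PyTorch. Only the Adam defaults, weight normalization, leaky-ReLU activations, and the `0.01 + softplus(x)` standard-deviation parameterization come from the paper's quoted text; the module name `GaussianHead`, the layer sizes, and the overall structure are illustrative assumptions, not the author's released implementation.

```python
# A minimal sketch of the reported experiment setup, assuming PyTorch.
# Layer sizes and module structure are hypothetical; the Adam defaults,
# weight normalization, leaky-ReLU, and the 0.01 + softplus(x) floor on
# standard deviations are taken from the paper's quoted setup.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import weight_norm


class GaussianHead(nn.Module):
    """One stochastic layer: outputs a mean and a floored standard deviation."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Weight normalization on the linear layers, as the paper reports
        # using it to improve numerical stability.
        self.mean = weight_norm(nn.Linear(in_features, out_features))
        self.pre_scale = weight_norm(nn.Linear(in_features, out_features))

    def forward(self, h: torch.Tensor):
        # Leaky-ReLU everywhere except the standard-deviation output itself.
        h = F.leaky_relu(h)
        mu = self.mean(h)
        # 0.01 + softplus(x) keeps standard deviations bounded away from zero.
        sigma = 0.01 + F.softplus(self.pre_scale(h))
        return mu, sigma


# Sizes are placeholders (e.g. flattened MNIST input to a 32-dim latent).
model = GaussianHead(in_features=784, out_features=32)
# Adam with PyTorch default hyperparameters, as stated in the paper.
optimizer = torch.optim.Adam(model.parameters())
```

Flooring the standard deviation at 0.01 prevents the variational distribution from collapsing toward zero variance, which the quoted text identifies as the numerical-stability motivation for this parameterization.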