Geometrically Coupled Monte Carlo Sampling

Authors: Mark Rowland, Krzysztof M. Choromanski, François Chalus, Aldo Pacchiano, Tamás Sarlós, Richard E. Turner, Adrian Weller

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We compare our new strategies against prior methods for improving sample efficiency, including quasi-Monte Carlo, by studying discrepancy. We explore our findings empirically, and observe benefits of our sampling schemes for reinforcement learning and generative modelling."
Researcher Affiliation | Collaboration | Mark Rowland (University of Cambridge); Krzysztof Choromanski (Google Brain Robotics); François Chalus (University of Cambridge); Aldo Pacchiano (University of California, Berkeley); Tamás Sarlós (Google Research); Richard E. Turner (University of Cambridge); Adrian Weller (University of Cambridge and Alan Turing Institute)
Pseudocode | Yes | "Algorithm 1: Antithetic inverse lengths coupling of Theorem 2.9"
Open Source Code | No | The paper neither states that source code is released nor links to a code repository for the described methodology.
Open Datasets | Yes | "We train on MNIST, and report the average train and test ELBO after 50 epochs for a variety of sampling algorithms and numbers of samples K"
Dataset Splits | No | The paper mentions train and test ELBO but does not specify training/validation/test splits (e.g., percentages or exact counts), nor does it refer to standard predefined splits that include a validation set.
Hardware Specification | No | The paper mentions "a distributed environment on a cluster of machines" but does not give specific hardware details such as GPU/CPU models, memory, or cloud instance types.
Software Dependencies | No | The paper mentions the Bullet simulator but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | "We used hyperparameters applied on a regular basis in other Monte Carlo algorithms for policy optimization, in particular chose σ = 0.1 and η = 0.01, where σ stands for the standard deviation of the entries of Gaussian vectors used for MC and η is the gradient step size."
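The Pseudocode row points to Algorithm 1, an antithetic coupling of Gaussian samples. A minimal NumPy sketch of the basic antithetic idea follows; it pairs each sample z with -z, and deliberately omits the paper's inverse-lengths coupling of sample norms, so it is an illustration of the general technique rather than the paper's algorithm.

```python
import numpy as np

def antithetic_gaussian_samples(d, k, rng=None):
    """Draw 2k d-dimensional Gaussian samples as k antithetic pairs (z, -z).

    Simplified sketch: the paper's Algorithm 1 additionally couples the
    *lengths* of paired samples ("antithetic inverse lengths"), which is
    not reproduced here.
    """
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((k, d))
    # Each pair (z_i, -z_i) cancels exactly, removing the odd (linear)
    # component of the integrand's Monte Carlo error.
    return np.concatenate([z, -z], axis=0)

samples = antithetic_gaussian_samples(d=3, k=5, rng=0)
```

Because the pairs cancel, the empirical mean of the 2k samples is zero by construction, which is the variance-reduction mechanism antithetic schemes exploit.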
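The Experiment Setup row reports σ = 0.1 (perturbation standard deviation) and η = 0.01 (gradient step size). These plug into a standard Monte Carlo policy-optimization update; the sketch below uses a generic i.i.d. isotropic-Gaussian smoothing estimator as a stand-in, whereas the paper's contribution is precisely to replace i.i.d. perturbations with geometrically coupled ones. The function name and toy objective are illustrative, not from the paper.

```python
import numpy as np

def es_gradient_step(f, theta, sigma=0.1, eta=0.01, k=8, rng=None):
    """One Monte Carlo policy-optimization step using the paper's reported
    sigma (std of Gaussian perturbation entries) and eta (step size).

    Illustrative sketch only: perturbations are drawn i.i.d. here, while
    the paper couples them geometrically to reduce estimator variance.
    """
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal((k, theta.size))              # Gaussian perturbations
    rewards = np.array([f(theta + sigma * e) for e in eps])  # k reward evaluations
    grad_est = (rewards[:, None] * eps).mean(axis=0) / sigma # smoothed-gradient estimate
    return theta + eta * grad_est                            # gradient-ascent step

# One step on a toy quadratic reward f(theta) = -||theta||^2.
theta_next = es_gradient_step(lambda x: -x @ x, np.ones(4), rng=0)
```

Repeating this update drives theta toward the maximizer of the Gaussian-smoothed reward; the quality of grad_est at fixed k is what the coupled sampling schemes aim to improve.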