Nested Expectations with Kernel Quadrature

Authors: Zonghao Chen, Masha Naslidnyk, François-Xavier Briol

ICML 2025

Reproducibility assessment (variable, result, and LLM justification):
Research Type: Experimental. Supporting excerpts: "We then demonstrate empirically that our proposed method does indeed require fewer samples to estimate nested expectations on real-world applications including Bayesian optimisation, option pricing, and health economics."; "This fast rate is demonstrated numerically in Section 5, where we show that NKQ can provide significant accuracy gains in problems from Bayesian optimisation to option pricing and health economics."; "We now illustrate NKQ over a range of applications, including some where the theory does not hold but where we still observe significant gains in accuracy."
Researcher Affiliation: Academia. "1 Department of Computer Science, University College London. 2 Department of Statistical Science, University College London. Correspondence to: Zonghao Chen <EMAIL>."
Pseudocode: No. The paper describes the methodology using textual explanations and mathematical formulations, along with a high-level diagram in Figure 1. It does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks or figures.
Open Source Code: Yes. The code to reproduce all experiments is available at https://github.com/hudsonchen/nest_kq.
Open Datasets: No. The paper uses synthetic examples and defines problem settings (e.g., risk management in finance, health economics, Bayesian optimization with specific functions such as Dropwave, Ackley, and Cosine8). While these problems are well defined and commonly used as benchmarks, the paper does not provide concrete access information (links, DOIs, repositories, or formal citations to specific data files) for pre-collected datasets. The tasks are based primarily on problem definitions or functions rather than downloadable data collections.
Dataset Splits: No. For the synthetic and application-based experiments, the paper discusses sampling strategies and sample sizes (e.g., "N = T = 0.5 for NKQ"; "initial starting data D0 consists of 2 points sampled uniformly"). However, it does not provide details on traditional training, validation, or test splits, as most experiments involve either synthetic data generation or evaluation on defined functions and models rather than pre-existing partitioned datasets.
Hardware Specification: No. The paper does not provide specific hardware details such as CPU/GPU models, memory configurations, or the type of computing cluster used for the experiments. It only mentions that the "Dropwave, Ackley, and Cosine8 functions are synthetic and computationally cheap", without elaborating on the hardware.
Software Dependencies: No. The paper mentions software tools such as BoTorch and ProbNum in the context of their usage, but it does not specify version numbers for these or for any other key software libraries or programming languages required for reproducibility.
Experiment Setup: Yes. For the synthetic experiment, hyperparameters include "N = T = 0.5 for NKQ" and "N = T = 1 for NMC"; the regularizers are set to λ_X = λ_{0,X} N^(-2s_X/d_X) (log N)^((2s_X+2)/d_X) and λ_Θ = λ_{0,Θ} T^(-2s_Θ/d_Θ) following Theorem 1, where λ_{0,X} and λ_{0,Θ} are selected by grid search over {0.01, 0.1, 1.0}. For Bayesian optimization, the paper specifies that the "prior is a Gaussian process with zero mean and Matérn-0.5 covariance", the acquisition is "q-expected improvement... with q = 2", and "N = T = 2 for NMC and N = T = 1 for NKQ".
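For context, the nested Monte Carlo (NMC) baseline that the assessed paper compares against can be sketched in a few lines: draw N outer samples of θ, estimate the inner expectation with T samples each, and average. The sketch below is illustrative only; the toy integrand, function names, and sample sizes are assumptions for demonstration and are not taken from the paper.

```python
import math
import random


def nested_mc(f, g, sample_theta, sample_x, N, T, rng):
    """Plain nested Monte Carlo estimate of E_theta[ f( E_X[ g(X, theta) ] ) ].

    N outer samples of theta; T inner samples of X per outer sample.
    """
    total = 0.0
    for _ in range(N):
        theta = sample_theta(rng)
        # Inner Monte Carlo estimate of E_X[g(X, theta)].
        inner = sum(g(sample_x(rng), theta) for _ in range(T)) / T
        total += f(inner)
    return total / N


# Toy problem with a known answer: g(x, theta) = theta + x with x ~ N(0, 1),
# so the inner mean is exactly theta; f(y) = max(y, 0) then gives
# I = E[max(theta, 0)] = 1/sqrt(2*pi) for theta ~ N(0, 1).
rng = random.Random(0)
est = nested_mc(
    f=lambda y: max(y, 0.0),
    g=lambda x, theta: theta + x,
    sample_theta=lambda r: r.gauss(0.0, 1.0),
    sample_x=lambda r: r.gauss(0.0, 1.0),
    N=2000,
    T=100,
    rng=rng,
)
print(abs(est - 1.0 / math.sqrt(2.0 * math.pi)))  # absolute estimation error
```

NMC needs both N and T to grow to drive the error down, which is the sample-inefficiency that nested kernel quadrature (NKQ) is designed to improve on in smooth settings.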