Tighter sparse variational Gaussian processes

Authors: Thang D. Bui, Matthew Ashman, Richard E. Turner

TMLR 2025

Each entry below lists a reproducibility variable, its result, and the LLM response.
Research Type: Experimental. Extensive experiments on regression benchmarks, classification, and latent variable models demonstrate that the proposed approximation consistently matches or outperforms standard sparse variational GPs while maintaining the same computational cost.
Researcher Affiliation: Academia. Thang D. Bui (School of Computing, Australian National University); Matthew Ashman (Department of Engineering, University of Cambridge); Richard E. Turner (Department of Engineering, University of Cambridge).
Pseudocode: No. The paper describes its methods through equations but does not contain a clearly labelled pseudocode or algorithm block.
Open Source Code: Yes. An implementation is made available at https://github.com/thangbui/tighter_sparse_gp.
Open Datasets: Yes. To build intuition about the proposed method's behaviour, we first evaluate it on a 1-D regression problem used by Snelson & Ghahramani (2005). We next compare four methods... on eight medium to large regression datasets... (Yang et al., 2015). To evaluate the performance of the proposed approximation on non-Gaussian likelihoods, we run an experiment on the MNIST digit classification task... We use the Boston housing dataset (https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html), vary the number of inducing points... Finally, we demonstrate the proposed method's applicability to latent variable models through experiments with Bayesian GPLVM on the oil flow dataset... (Bishop & James, 1993).
Dataset Splits: No. Section 6.3 states: 'We use the Matern-3/2 kernel and repeat each experiment 10 times, each employing a random train/test split.' However, it does not specify the exact percentages or counts for these splits, nor does it mention validation splits.
Hardware Specification: No. The paper does not provide specific details about the hardware (e.g., GPU model, CPU type, memory) used for running the experiments.
Software Dependencies: No. Implementations based on GPyTorch and GPflow are released at https://github.com/thangbui/tighter_sparse_gp. The paper mentions GPyTorch and GPflow but does not provide specific version numbers for these libraries.
Experiment Setup: Yes. We use the Matérn-3/2 kernel and repeat each experiment 10 times, each employing a random train/test split. Figure 1 illustrates the optimisation trajectories of these methods and the final fits for both SGPR and T-SGPR using five inducing points. The final values for both uncollapsed and collapsed versions of the proposed bound appear tighter than that of the Titsias bound in practice. The learned hyperparameters reveal that T-SGPR prefers smaller observation noise (0.115) and larger kernel variance (0.107) compared to that of SGPR (0.126 and 0.087, respectively).
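For context on the setup described above, the following is a minimal numpy sketch of the standard collapsed SGPR (Titsias) bound that the paper's T-SGPR tightens, using a Matérn-3/2 kernel and five inducing points as in Figure 1. The synthetic 1-D data, the lengthscale of 1.0, and the inducing-point locations are hypothetical stand-ins (not the paper's Snelson data); the noise and kernel variance values are borrowed from the SGPR hyperparameters reported above for illustration only.

```python
import numpy as np

def matern32(x1, x2, variance=1.0, lengthscale=1.0):
    """Matern-3/2 kernel, the covariance family used in the paper's experiments."""
    r = np.abs(x1[:, None] - x2[None, :]) / lengthscale
    return variance * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def sgpr_collapsed_bound(x, y, z, noise, variance=1.0, lengthscale=1.0):
    """Standard collapsed SGPR (Titsias) lower bound on the log marginal likelihood:
    log N(y | 0, Qnn + noise*I) - tr(Knn - Qnn) / (2*noise),
    where Qnn = Knm Kmm^{-1} Kmn is the Nystrom approximation to Knn."""
    n = x.shape[0]
    Kmm = matern32(z, z, variance, lengthscale) + 1e-8 * np.eye(z.shape[0])  # jitter
    Knm = matern32(x, z, variance, lengthscale)
    Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)
    cov = Qnn + noise * np.eye(n)
    _, logdet = np.linalg.slogdet(cov)
    quad = y @ np.linalg.solve(cov, y)
    log_gauss = -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)
    trace_term = np.trace(matern32(x, x, variance, lengthscale) - Qnn) / (2.0 * noise)
    return log_gauss - trace_term

# Hypothetical 1-D regression data standing in for the Snelson dataset.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 6.0, size=50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)
z = np.linspace(0.5, 5.5, 5)  # five inducing points, as in Figure 1
elbo = sgpr_collapsed_bound(x, y, z, noise=0.126, variance=0.087)
```

By construction this quantity never exceeds the exact GP log marginal likelihood, which is the gap the paper's tighter bound aims to shrink at the same O(nm²) cost.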