Rényi Neural Processes
Authors: Xuesong Wang, He Zhao, Edwin V. Bonilla
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our approach across multiple benchmarks including regression and image inpainting tasks, and show significant performance improvements of RNPs in real-world problems. Our extensive experiments show consistently better log-likelihoods over state-of-the-art NP models. |
| Researcher Affiliation | Academia | CSIRO's Data61, Australia. Correspondence to: Xuesong Wang <EMAIL>. |
| Pseudocode | Yes | A.1. Pseudocode: Algorithm 1, Rényi Neural Processes |
| Open Source Code | Yes | Our code is published at https://github.com/csiro-funml/renyineuralprocesses |
| Open Datasets | Yes | We evaluate the proposed method on multiple regression tasks: 1D regression [...] image inpainting [...] on three image datasets: MNIST, SVHN and CelebA. [...] We also tested TNP-D on the Extended MNIST dataset with 47 classes |
| Dataset Splits | Yes | The number of context points is randomly sampled M ∼ U(3, 50), and the number of target points is N ∼ U(3, 50 − M) (Nguyen & Grover, 2022). We choose 100,000 functions for training, and sample another 3,000 functions for testing. [...] The number of context points for inpainting tasks is M ∼ U(3, 200) and the target point count is N ∼ U(3, 200 − M). [...] We choose 20,000 functions for training, and sample another 1,000 functions for evaluation. [...] We use classes 0-10 for meta training and hold out classes 11-46 for meta testing under prior misspecification. |
| Hardware Specification | No | All the models can be trained using a single GPU with 16GB memory. |
| Software Dependencies | No | The paper does not explicitly mention any specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | We set α = 0.7 to train VI-based RNPs and analogously α = 0.3 for ML-based baselines. [...] The number of samples K for the Monte Carlo estimate is 32 for training and 50 for inference. [...] The input features were normalized to [−2, 2]. [...] The input coordinates were normalized to [−1, 1] and pixel intensities were rescaled to [−0.5, 0.5]. [...] The noise level β is set as 0.3 for both training and testing. |
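The α and K values in the Experiment Setup row parameterize a Rényi variational bound, which generalizes the standard ELBO and is typically estimated with K Monte Carlo importance weights. The sketch below is illustrative only, not the authors' released implementation: the function name and the assumption that per-sample log-densities `log_joint` and `log_q` are available are ours.

```python
import numpy as np

def renyi_bound(log_joint, log_q, alpha=0.7):
    """Monte Carlo estimate of the alpha-Renyi variational bound.

    log_joint, log_q: shape-(K,) arrays of log p(y, z) and log q(z)
    evaluated at samples z ~ q. Requires alpha != 1; as alpha -> 1 the
    bound approaches the standard ELBO, mean(log_joint - log_q).
    """
    w = (1.0 - alpha) * (log_joint - log_q)  # scaled log importance weights
    m = np.max(w)                            # log-sum-exp stabilization
    return (m + np.log(np.mean(np.exp(w - m)))) / (1.0 - alpha)

# Example with K = 32 samples, matching the training-time sample count
# reported in the setup row (values here are synthetic).
rng = np.random.default_rng(0)
log_q = rng.normal(size=32)
log_joint = log_q + rng.normal(size=32)
bound = renyi_bound(log_joint, log_q, alpha=0.7)
```

With α < 1 this estimator is never smaller than the plain ELBO computed from the same samples, which is consistent with the paper's use of α = 0.7 as a milder, mass-covering alternative to the KL-based objective.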