Flexible Tails for Normalizing Flows
Authors: Tennessee Hickling, Dennis Prangle
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show this approach outperforms current methods, especially when the target distribution has large dimension or tail weight. We concentrate on density estimation, investigating both synthetic and real data, but also include a small-scale variational inference experiment. We demonstrate improved empirical results for density estimation (synthetic and real data examples) and variational inference (an artificial target example) compared to standard NFs, and other NF methods for heavy tails. |
| Researcher Affiliation | Academia | 1School of Mathematics, University of Bristol, Bristol, UK. Correspondence to: Tennessee Hickling <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: Sampling Student's T |
| Open Source Code | Yes | Code for all our examples can be found at https://github.com/Tennessee-Wallaceh/tailnflows. |
| Open Datasets | Yes | This section investigates density estimation for several real datasets with extreme values, covering insurance, financial and weather applications. Three are taken from Liang et al. (2022); Laszkiewicz et al. (2022) and one is novel to this paper. Appendix I has more information about the datasets and standard preprocessing applied before density estimation. ... Table 6: Real data sets. Insurance (dimension 2, average ν 2.17): medical claims, from Liang et al. (2022). Fama 5 (dimension 5, average ν 2.36): daily returns of 5 major indices, from Liang et al. (2022). S&P 500 (dimension 300, average ν 4.78): daily returns of the 300 most traded US stocks, novel to this paper. CLIMDEX (dimension 412, average ν 4.24): high-dimensional meteorological data, from Laszkiewicz et al. (2022). |
| Dataset Splits | Yes | Each repeat samples a new set of data, with 5000 observations, which is split in proportion 40/20/40 to give training, validation and test sets respectively. ... The test set is comprised of observations after 2017-09-14, with train and validation sampled uniformly from the period up to and including this date. This corresponds to 1292 training, 645 validation and 1290 test observations respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | We use the nflows package (Durkan et al., 2020) to implement the NF models. This depends on PyTorch (Paszke et al., 2019) for automatic differentiation. (Package names are given, but no version numbers are specified.) |
| Experiment Setup | Yes | We train using the Adam optimiser with a learning rate of 5e-3. We use an early stopping procedure, stopping once there has been no improvement in validation loss in 100 epochs, and returning the model from the epoch with best validation loss. ... Table 5: Optimisation hyperparameters. Synthetic density estimation (Section 4.1): learning rate 5e-3, batch size none (full pass). Real data (Section 4.2): learning rate 5e-4, batch size 512. Variational inference (Section 4.3): learning rate 1e-3, batch size 100. |
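The pseudocode row quotes "Algorithm 1: Sampling Student's T". The paper's exact algorithm is not reproduced in this report; a minimal sketch of the standard normal/chi-squared scale-mixture construction for Student's t (an assumption on our part, not necessarily the paper's procedure) looks like:

```python
import numpy as np

def sample_student_t(nu, size, rng=None):
    """Sample Student's t via the scale-mixture construction:
    T = Z / sqrt(V / nu), with Z ~ N(0, 1) and V ~ chi^2(nu).
    Function name and interface are illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(size)       # standard normal draws
    v = rng.chisquare(nu, size)         # chi-squared scaling draws
    return z / np.sqrt(v / nu)

# Smaller nu gives heavier tails; nu -> infinity recovers the Gaussian.
samples = sample_student_t(nu=3.0, size=100_000)
```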
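The dataset-splits row describes a 40/20/40 train/validation/test split of 5000 sampled observations. A minimal sketch, assuming a uniform random shuffle (the function name is hypothetical):

```python
import numpy as np

def split_40_20_40(x, rng=None):
    """Shuffle an array and split it 40/20/40 into train/val/test."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    idx = rng.permutation(n)            # random order over all observations
    n_train, n_val = int(0.4 * n), int(0.2 * n)
    return (x[idx[:n_train]],
            x[idx[n_train:n_train + n_val]],
            x[idx[n_train + n_val:]])

data = np.arange(5000)
train, val, test = split_40_20_40(data)
# 2000 / 1000 / 2000 observations respectively
```

Note the real-data S&P 500 split quoted above is chronological rather than uniform: test observations are those after 2017-09-14.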
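The experiment-setup row describes early stopping: halt once validation loss has not improved for 100 epochs and return the model from the best-validation epoch. The control flow can be abstracted as below; the callback interface and names are hypothetical (the paper's actual loop trains with Adam in PyTorch):

```python
def early_stopping_loop(train_epoch, validate, patience=100, max_epochs=10_000):
    """Run train_epoch(epoch) -> state, score each state with validate(state),
    and stop after `patience` epochs without improvement.
    Returns the best-scoring state and its validation loss."""
    best_val, best_state, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        state = train_epoch(epoch)   # e.g. one optimiser pass over the training set
        val = validate(state)        # validation loss for this epoch's model
        if val < best_val:
            best_val, best_state, stale = val, state, 0
        else:
            stale += 1
            if stale >= patience:    # no improvement for `patience` epochs
                break
    return best_state, best_val
```

In a PyTorch setting, `state` would typically be a deep copy of `model.state_dict()` taken at the best-validation epoch.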