Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Turning Normalizing Flows into Monge Maps with Geodesic Gaussian Preserving Flows
Authors: Guillaume Morel, Lucas Drumetz, Simon Benaïchouche, Nicolas Courty, François Rousseau
TMLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply GP flows on two popular NF models: BNAF, a discrete NF (De Cao et al., 2020) for two-dimensional test cases and FFJORD, a continuous NF (Grathwohl et al., 2018) for higher dimensional cases. Both of these models are solid references among NF and do not incorporate any OT knowledge in their architecture or training procedure. The codes are taken from the official repositories. The FFJORD model has an inverse function directly available in the code, which is not the case for the BNAF model. For this reason we consider only the FFJORD model when interpolating in the latent space of the dSprites and MNIST datasets because interpolations require the NF architecture to have an inverse function available. To compare our results we consider the CP-Flow architecture (Huang et al., 2020). |
| Researcher Affiliation | Academia | Guillaume Morel EMAIL IMT Atlantique, LaTIM, U1101, Brest, France. Lucas Drumetz EMAIL IMT Atlantique, Lab-STICC, UMR CNRS 6285, Brest, France. Simon Benaïchouche EMAIL IMT Atlantique, Lab-STICC, UMR CNRS 6285, Brest, France. Nicolas Courty EMAIL Université Bretagne Sud, IRISA, UMR CNRS 6074, Vannes, France. François Rousseau EMAIL IMT Atlantique, LaTIM, U1101, Brest, France. |
| Pseudocode | No | The paper includes mathematical propositions, theorems, and proofs but does not provide any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available here https://github.com/morel-g/GPFlow. |
| Open Datasets | Yes | Data structure preservation with optimal transport. Finally we show one potential interest of GP flows by studying the preservation of the data structure experimentally. More specifically we focus on the preservation of disentanglement on the dSprites (Matthey et al., 2017), MNIST (Lecun et al., 1998) and Chairs (Mathieu et al., 2014) datasets in some variational auto-encoder (VAE) latent space. |
| Dataset Splits | Yes | We consider a training set of 80K samples and a testing set of 20K samples, 15 time steps with a Runge-Kutta 4 time discretization and a GP flow with two intermediate layers of size 15. For the Euler penalization we take the exact same parameters with λ = 5 × 10⁻⁴, which is divided periodically by a factor of 2; the number of divisions is given in Table 3. |
| Hardware Specification | Yes | We run the experiments on two separate GPUs: a NVIDIA Quadro RTX 8000 and a NVIDIA TITAN X. |
| Software Dependencies | No | The paper mentions "torch.autograd from pytorch" and the "POT library Flamary et al. (2021)" but does not provide specific version numbers for these software components. It also mentions using "FFJORD" and "BNAF" models from their official repositories without specifying the exact versions. |
| Experiment Setup | Yes | Table 3: Parameters used for the training of GP flows on the 2D toy examples. Table 4: Parameters used for the training of GP flows. Table 5: Architectures used for CP-Flow for the dSprites, MNIST and Chairs datasets. |
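The training setup quoted above (an 80K/20K train/test split and a penalization weight λ = 5 × 10⁻⁴ halved periodically during training) can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' code: the toy data, the random seed, and the number of halvings (`n_divisions = 3`, a placeholder for the value given in Table 3 of the paper) are all assumptions.

```python
import numpy as np

# Placeholder 2D toy samples standing in for the paper's datasets.
rng = np.random.default_rng(0)
data = rng.standard_normal((100_000, 2))

# 80K training samples and 20K testing samples, as reported.
n_train = 80_000
perm = rng.permutation(len(data))
train, test = data[perm[:n_train]], data[perm[n_train:]]

# Euler penalization weight: start at 5e-4 and halve it periodically.
# The number of halvings is a placeholder assumption (see Table 3).
lam = 5e-4
n_divisions = 3
for _ in range(n_divisions):
    lam /= 2

print(train.shape, test.shape, lam)  # (80000, 2) (20000, 2) 6.25e-05
```

The permutation-based split avoids any ordering bias in the source data; in practice the same effect is obtained with a library utility such as `sklearn.model_selection.train_test_split`.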