Deep Generative Models: Complexity, Dimensionality, and Approximation

Authors: Kevin Wang, Hongqian Niu, Yixin Wang, Didong Li

JMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "In this section, we provide empirical evidence on three toy examples, for visualization purposes, of these deep generative networks' abilities to learn space-filling curve approximations. We demonstrate the ability of the standard ReLU network to map the uniform distribution on unit hypercubes [0, 1]^m of various dimensions to a variety of target distributions, where m is the dimension of the input distribution."
Researcher Affiliation: Academia. Kevin Wang, Department of Biostatistics, University of North Carolina at Chapel Hill; Hongqian Niu, Department of Biostatistics, University of North Carolina at Chapel Hill; Yixin Wang, Department of Statistics, University of Michigan; Didong Li, Department of Biostatistics, University of North Carolina at Chapel Hill.
Pseudocode: No. The paper describes methods and proofs in prose and mathematical notation but does not contain a dedicated pseudocode or algorithm block.
Open Source Code: Yes. All code for the experiments and for generating the paper's figures can be found in the GitHub repository at https://github.com/hong-niu/dgm24.
Open Datasets: No. The paper primarily uses synthetic 'toy examples' for its simulations, such as the 'uniform distribution on the unit square [0, 1]²', the 'uniform distribution on a 2-dimensional cylinder S² ⊂ ℝ³', and the 'uniform distribution on a unit cube [0, 1]³'. These are internally generated distributions rather than external open datasets requiring explicit access information.
Dataset Splits: No. The paper describes dynamic sampling for its simulations, stating: 'during training the input is taken to be a random sample on the unit square at each iteration, while the target data is a uniform grid on the unit square, so that the input sample is not identical to the target data at any iteration.' However, it does not specify fixed training, validation, or test dataset splits with explicit percentages, counts, or predefined configurations.
Hardware Specification: Yes. All experiments were run on a single machine with an Nvidia RTX 4080 GPU with 16 GB of memory; code was adapted from the Python Optimal Transport (POT) package (Flamary et al., 2021) for computing the Wasserstein distance.
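The paper computes Wasserstein distances with the POT package. As a self-contained illustration (not the authors' code), the same quantity for two equal-size, uniformly weighted point clouds reduces to an assignment problem, which SciPy can solve exactly:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2(x, y):
    """Exact 2-Wasserstein distance between two equal-size, uniformly
    weighted point clouds, solved as a linear assignment problem."""
    # Pairwise squared Euclidean cost matrix between the two clouds.
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return np.sqrt(cost[rows, cols].mean())

rng = np.random.default_rng(0)
a = rng.uniform(0, 1, size=(100, 2))  # empirical sample on the unit square
b = rng.uniform(0, 1, size=(100, 2))  # second sample to compare against
d_ab = wasserstein2(a, b)             # small positive distance
d_aa = wasserstein2(a, a)             # identical clouds -> 0
```

POT's `ot.emd2` handles the general unequal-weight case; the assignment formulation above is only valid for equal sample sizes with uniform weights.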
Software Dependencies: No. The paper mentions adapting code from 'the Python Optimal Transport (POT) package (Flamary et al., 2021) for computing the Wasserstein distance.' However, it does not specify the version number of the POT package or of any other software dependencies such as Python or PyTorch.
Experiment Setup: Yes. "In Figure 2, we show that training a fully connected feed-forward network with just 2 hidden layers of 10 nodes each for 10,000 iterations under Wasserstein loss, again from the Python Optimal Transport (POT) package, achieves low Wasserstein loss as well as low average empirical fill distance."
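To make the quoted setup concrete, here is a minimal NumPy sketch (an illustration, not the authors' implementation) of the generator architecture described above — a fully connected ReLU network with 2 hidden layers of 10 nodes mapping uniform samples on [0, 1]² — together with the standard fill distance of the generated samples with respect to a target grid; weights are random here, and training under the Wasserstein loss is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_net(x, sizes=(2, 10, 10, 2)):
    """Forward pass of a fully connected feed-forward ReLU network with
    the layer widths quoted above; weights are random (untrained)."""
    for i, (m, n) in enumerate(zip(sizes[:-1], sizes[1:])):
        w = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
        x = x @ w
        if i < len(sizes) - 2:      # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    return x

def fill_distance(samples, grid):
    """Max over grid points of the distance to the nearest sample --
    the usual definition of the fill distance of 'samples' w.r.t. 'grid'."""
    d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=-1)
    return d.min(axis=1).max()

x = rng.uniform(0, 1, size=(1000, 2))    # input: uniform on [0, 1]^2
out = relu_net(x)                        # generated samples in R^2
g = np.stack(np.meshgrid(np.linspace(0, 1, 10),
                         np.linspace(0, 1, 10)), -1).reshape(-1, 2)
fd = fill_distance(out, g)               # coverage of the target grid
```

The "average empirical fill distance" reported in the paper presumably averages this quantity over repeated runs; the single-run definition above is the standard one.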