A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models

Authors: Shivam Kumar, Yun Yang, Lizhen Lin

ICML 2025

Reproducibility assessment (variable: result — LLM response)
Research Type: Experimental — "In this work, we explore the theoretical properties of conditional deep generative models under the statistical framework of distribution regression... Our results lead to the convergence rate of a sieve maximum likelihood estimator (MLE)... Finally, in our numerical studies, we demonstrate the effective implementation of the proposed approach using both synthetic and real-world datasets, which also provide complementary validation to our theoretical findings."
Researcher Affiliation: Academia — "1 Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, USA; 2 Department of Mathematics, University of Maryland, College Park, USA."
Pseudocode: No — The paper describes its algorithms and models (e.g., the conditional variational auto-encoder architecture, Multr m, Monr m,γ) in paragraph text and mathematical notation, but it contains no clearly labeled pseudocode or algorithm blocks with structured, step-by-step procedures.
Open Source Code: No — The paper provides no explicit statement about releasing code and no link to a code repository.
Open Datasets: Yes — "We utilized the widely used MNIST dataset for two purposes: to demonstrate the generalizability of our approach to a benchmark image dataset..."
Dataset Splits: Yes — "The sample size used for simulation is 5000, with a training-to-testing ratio of 4 : 1..." (the paper states this split in two places).
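The stated split can be sketched in a few lines; this is a minimal illustration of a 4:1 train/test partition of 5000 samples, not the authors' code (the function name and seeding are assumptions):

```python
import random

def train_test_split_4to1(n_samples: int, seed: int = 0):
    """Randomly partition sample indices at the paper's stated 4:1 ratio."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    n_train = n_samples * 4 // 5  # 4:1 ratio -> 80% of samples for training
    return idx[:n_train], idx[n_train:]

train_idx, test_idx = train_test_split_4to1(5000)
```

With n = 5000 this yields 4000 training and 1000 testing samples, matching the 4:1 ratio quoted above.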
Hardware Specification: No — The paper does not describe the specific hardware (e.g., GPU models or CPU types) used to run the experiments.
Software Dependencies: No — The paper describes the neural network architectures and experimental settings but does not name any software libraries or frameworks, let alone their version numbers.
Experiment Setup: Yes — "The neural architecture for both the encoder and decoder consists of two deep layers, i.e., L = 2. The hyperparameters are as follows: renc = (p + 1, 10, 10) for µϕ and Σϕ, and rdec = (10 + p, 10, 1) for g. ... We employ a batch size of 64 with a learning rate of 10^-3."
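The quoted layer widths can be sanity-checked with a small sketch. This toy ReLU multilayer perceptron only illustrates the stated shapes — encoder widths renc = (p + 1, 10, 10) and decoder widths rdec = (10 + p, 10, 1) with L = 2 layers; the choice of p, the initialization, and the forward pass are illustrative assumptions, not the authors' conditional VAE implementation:

```python
import math
import random

def make_mlp(widths, seed=0):
    """Build weight matrices and biases for a fully connected net."""
    rng = random.Random(seed)
    layers = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        W = [[rng.gauss(0.0, 1.0 / math.sqrt(d_in)) for _ in range(d_in)]
             for _ in range(d_out)]
        b = [0.0] * d_out
        layers.append((W, b))
    return layers

def forward(layers, x):
    """ReLU MLP forward pass; the final layer is left linear."""
    for i, (W, b) in enumerate(layers):
        x = [sum(w * xj for w, xj in zip(row, x)) + bi
             for row, bi in zip(W, b)]
        if i < len(layers) - 1:  # ReLU on hidden layers only
            x = [max(0.0, v) for v in x]
    return x

p = 3                            # example covariate dimension (assumed)
r_enc = (p + 1, 10, 10)          # encoder widths for mu_phi and Sigma_phi
r_dec = (10 + p, 10, 1)          # decoder widths for g
encoder = make_mlp(r_enc)
decoder = make_mlp(r_dec)
```

Each network has exactly L = 2 weight layers, and the encoder's 10-dimensional output dimension matches the latent dimension consumed (together with the p covariates) by the decoder.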