Conditional Diffusion Models are Minimax-Optimal and Manifold-Adaptive for Conditional Distribution Estimation
Authors: Rong Tang, Lizhen Lin, Yun Yang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have also conducted a simulation study (see Appendix A) to demonstrate the effectiveness of this theoretically guided neural network architecture compared to a standard single ReLU neural network (across both space and time). In these experiments, we consider cases where, given the covariate X, the response Y is supported on different (tilted) ellipses depending on the values of the covariate. Consistent with our theoretical findings, the simulations show that incorporating the piecewise structure into the neural network results in a more accurate estimation of the conditional distribution. |
| Researcher Affiliation | Academia | Rong Tang, The Hong Kong University of Science and Technology, EMAIL; Lizhen Lin & Yun Yang, University of Maryland, College Park, EMAIL |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. Methodologies are described in paragraph text and mathematical formulations. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | No | The paper focuses on theoretical properties of diffusion models for distribution estimation and mentions a simulation study but does not specify any publicly available datasets or provide access information for any data used. |
| Dataset Splits | No | The paper does not mention any specific datasets or provide details on how data was split for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for experiments or simulations. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | No | The paper provides theoretical definitions for neural network classes and their sizes based on problem characteristics, but it does not specify concrete hyperparameters (e.g., learning rate, batch size) or training configurations for any practical experimental setup in the main text. Details for the mentioned simulation study are not included in the provided text. |
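Since the paper's Appendix A simulation details are not included in the available text, the covariate-dependent ellipse setup it describes can only be illustrated by a hypothetical reconstruction. The sketch below generates (X, Y) pairs where, given X, the response Y lies on a tilted ellipse whose semi-axes and tilt angle vary with X; all parameter choices (the uniform covariate, the specific axis and tilt functions) are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def sample_conditional_ellipse(n, rng=None):
    """Draw (X, Y) pairs where, given X, Y is supported on a tilted ellipse.

    Hypothetical setup: X ~ Uniform(0, 1); the ellipse's semi-axes and
    tilt angle vary smoothly with X. These parameter choices are
    illustrative, not taken from the paper's Appendix A.
    """
    rng = np.random.default_rng(rng)
    x = rng.uniform(0.0, 1.0, size=n)            # covariate
    theta = rng.uniform(0.0, 2 * np.pi, size=n)  # position on the ellipse
    a, b = 1.0 + x, 0.5 + 0.5 * x                # semi-axes depend on X
    tilt = np.pi * x                             # tilt angle depends on X
    # Ellipse points in the untilted frame
    u, v = a * np.cos(theta), b * np.sin(theta)
    # Rotate each point by its covariate-dependent tilt
    y1 = np.cos(tilt) * u - np.sin(tilt) * v
    y2 = np.sin(tilt) * u + np.cos(tilt) * v
    return x, np.stack([y1, y2], axis=1)

x, y = sample_conditional_ellipse(1000, rng=0)
print(x.shape, y.shape)  # (1000,) (1000, 2)
```

A conditional diffusion model trained on such pairs would be asked to recover, for each value of X, a distribution concentrated on the corresponding ellipse, which is the low-dimensional support structure the paper's manifold-adaptivity result concerns.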