Learning conditional distributions on continuous spaces

Authors: Cyril Bénézet, Ziteng Cheng, Sebastian Jaimungal

JMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical findings demonstrate that, with a suitably designed structure, the neural network has the ability to adapt to a suitable level of Lipschitz continuity locally. For reproducibility, our code is available at https://github.com/zcheng-a/LCD_kNN. ... In Section 3.2, we evaluate the performance of the trained P_θ, denoted by P_Θ, using three sets of synthetic data in 1D and 3D spaces.
Researcher Affiliation | Academia | Cyril Bénézet EMAIL Université Paris-Saclay, CNRS, Univ Evry, ENSIIE, Laboratoire de Mathématiques et Modélisation d'Évry, 91037, Évry-Courcouronnes, France; Ziteng Cheng EMAIL Financial Technology Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, 511400, China; Sebastian Jaimungal EMAIL Department of Statistical Sciences, University of Toronto, Toronto, ON M5G 1Z5, Canada
Pseudocode | Yes | Algorithm 1: Deep learning conditional distribution in conjunction with k-NN estimator ... Algorithm 2: Power iteration with momentum for updating the W2 estimate, applied to all convex potential layers simultaneously at every epoch during training
Open Source Code | Yes | For reproducibility, our code is available at https://github.com/zcheng-a/LCD_kNN.
Open Datasets | No | We consider data simulated from three different models. ... We generate 10^4 samples for Models 1 and 2. ... For Model 3, we generate 10^6 samples and select k = 300.
Dataset Splits | No | The paper generates data from three different models and does not explicitly define traditional training/validation/test splits on a fixed dataset. Instead, it generates data and uses randomly selected query points for the training objective, as in Algorithm 1: 'generate a query point Xn ~ Uniform(X)'.
Hardware Specification | Yes | Table 1: This table compares the execution times for 500 runs of exact NNS versus ANNS-RBSP, both utilizing parallel computing, facilitated by PyTorch, with an NVIDIA L40 GPU. ... Table 5: All times were obtained from a machine equipped with an NVIDIA L40 GPU.
Software Dependencies | No | The paper mentions "PyTorch" in Table 1 and the "Adam optimizer (Kingma and Ba, 2017)" in Section 3.2 and Table 4. However, it does not provide version numbers for PyTorch or any other software dependency, which are necessary for full reproducibility.
Experiment Setup | Yes | Table 4 (hyper-parameters): Sample size: 1e4 for Models 1 & 2, 1e6 for Model 3. k: 100 for Models 1 & 2, 300 for Model 3 (see Definition 9). Network structure: StdNet with layer-wise residual connections (He et al., 2016) and batch normalization (Ioffe and Szegedy, 2015) after the affine transformation. ... Number of episodes: 5e3 for Models 1 & 2, 1e4 for Model 3.
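The quoted pseudocode names Algorithm 2, "power iteration with momentum", without further detail. As a rough illustration of the underlying idea only (not the authors' implementation, which runs on all convex potential layers simultaneously during training), here is a minimal NumPy sketch that estimates the largest singular value of a single weight matrix; the momentum coefficient `beta`, iteration count, and function name are illustrative assumptions.

```python
import numpy as np

def top_singular_value(A, beta=0.05, iters=200, seed=0):
    """Estimate the largest singular value of A via power iteration
    with a momentum term on the Gram matrix A^T A.

    Recurrence: y_{t+1} = A^T A x_t - beta * x_{t-1}. Both iterates
    are rescaled by the same factor, so the linear recurrence (and
    hence convergence to the top eigenvector) is preserved.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    x /= np.linalg.norm(x)
    x_prev = np.zeros_like(x)
    for _ in range(iters):
        y = A.T @ (A @ x) - beta * x_prev
        scale = np.linalg.norm(y)
        x_prev = x / scale
        x = y / scale
    # Rayleigh quotient of A^T A at the unit-norm iterate gives sigma_1^2
    return float(np.sqrt(x @ (A.T @ (A @ x))))
```

With `beta = 0` this reduces to plain power iteration; the momentum term is what the algorithm's title refers to, and it accelerates convergence when the top two eigenvalues of A^T A are close.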
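The k-NN estimator of Definition 9 and the exact-NNS baseline of Table 1 both rest on nearest-neighbour search over the generated sample. As a hedged sketch of the exact (brute-force) case, assuming a single query point, here is a NumPy version; the paper's implementation instead batches the search on an NVIDIA L40 GPU via PyTorch.

```python
import numpy as np

def knn_indices(samples, query, k):
    """Indices of the k nearest rows of `samples` to `query`
    (exact, brute-force search).

    Squared Euclidean distances suffice for ranking, so the square
    root is skipped; argpartition avoids a full sort and returns the
    k smallest in arbitrary order.
    """
    d2 = np.sum((samples - query) ** 2, axis=1)
    return np.argpartition(d2, k)[:k]
```

The conditional distribution at a query point is then approximated by the empirical measure of the responses attached to these k neighbours, which is the role the hyper-parameter k (100 or 300 in Table 4) plays.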