Density Ratio Estimation with Conditional Probability Paths

Authors: Hanlin Yu, Arto Klami, Aapo Hyvarinen, Anna Korba, Omar Chehab

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To benchmark the accuracy of our CTSM objectives, we closely follow the experimental setup of Rhodes et al. (2020) and Choi et al. (2022) and also provide further experiments. ... Overall, these experiments show that vectorized CTSM achieves competitive or better performance to TSM but is orders of magnitude faster, especially in higher dimensions.
Researcher Affiliation | Academia | University of Helsinki, Finland; ENSAE, CREST, IP Paris, France.
Pseudocode | No | The paper describes its methodology using mathematical formulations and textual explanations, but it does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Our code is available at https://github.com/ksnxr/dre-prob-paths.
Open Datasets | Yes | We consider Energy-based Modeling (EBM) tasks on MNIST (LeCun et al., 2010).
Dataset Splits | Yes | For experiments apart from EBM, for each task we employ a fixed validation set of size 10000 and select the learning rates based on results on the sets. After a certain number of steps, an evaluation step is performed, and the model is evaluated based on both the validation set and a test set, consisting of 10000 samples dynamically generated based on the data generation process. ... on MNIST test set with batch size 1000
Hardware Specification | Yes | With Gaussian flows, all experiments were run using one NVIDIA V100 GPU each. With ambient space, the models were trained and evaluated using one NVIDIA A100 GPU each, while the running times were obtained based on 10000 steps using one NVIDIA V100 GPU each.
Software Dependencies | No | The paper mentions that "density ratios are evaluated using the initial value problem ODE solver as implemented in SciPy (Virtanen et al., 2020)", but it does not specify the version number of SciPy or any other key software component used in the experiments.
Experiment Setup | Yes | The learning rate is tuned between [5e-4, 1e-3, 2e-3, 5e-3, 1e-2]. For CTSM-v, we largely reuse the hyperparameters, while tuning the step size between [5e-4, 1e-3, 2e-3]. For TSM, we tune the step size between [2e-4, 5e-4, 1e-3]. ... All models are trained for 20000 iterations. ... we use a batch size of 500
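The Software Dependencies row notes that density ratios are evaluated by solving an initial value problem with SciPy's ODE solver, i.e. by integrating a learned time score d/dt log p_t(x) over t in [0, 1]. Below is a minimal, dependency-free sketch of that evaluation step under assumptions of ours, not the paper's: a one-dimensional Gaussian path with a linearly interpolated standard deviation, a closed-form time score standing in for the trained CTSM network, and a trapezoid rule standing in for SciPy's IVP solver. All function names here are illustrative.

```python
import math

def time_score(x, t, s0=1.0, s1=2.0):
    # Toy probability path p_t = N(0, sigma_t^2) with sigma_t = (1-t)*s0 + t*s1,
    # interpolating N(0, s0^2) at t=0 and N(0, s1^2) at t=1 (an assumed path,
    # not the paper's exact construction).
    s = (1 - t) * s0 + t * s1
    ds = s1 - s0  # d sigma_t / dt
    # log p_t(x) = -x^2 / (2 sigma_t^2) - log sigma_t - 0.5 * log(2 pi), so
    # d/dt log p_t(x) = (x^2 / sigma_t^3 - 1 / sigma_t) * d sigma_t / dt.
    return (x * x / s**3 - 1.0 / s) * ds

def log_ratio(x, n=1000):
    # log p_1(x) - log p_0(x) = integral over [0, 1] of the time score;
    # the trapezoid rule here stands in for the IVP ODE solve used in the paper.
    h = 1.0 / n
    total = 0.5 * (time_score(x, 0.0) + time_score(x, 1.0))
    for i in range(1, n):
        total += time_score(x, i * h)
    return total * h

def exact_log_ratio(x, s0=1.0, s1=2.0):
    # Closed-form check: log N(x; 0, s1^2) - log N(x; 0, s0^2).
    return (-x * x / (2 * s1**2) - math.log(s1)) - (-x * x / (2 * s0**2) - math.log(s0))
```

In the paper's actual setting, the closed-form `time_score` would be replaced by the trained network's output and the trapezoid loop by a call to an adaptive IVP solver such as `scipy.integrate.solve_ivp`.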