Revisiting Contrastive Divergence for Density Estimation and Sample Generation
Authors: Azwar Abdulsalam, Joseph G. Makin
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that a simple Conv Net can be trained with this method to be good at generation as well as density estimation for CIFAR-10, Oxford Flowers, and a synthetic dataset in which the learned density can be verified visually. |
| Researcher Affiliation | Academia | Azwar Abdulsalam, Elmore School of Electrical and Computer Engineering, Purdue University; Joseph G. Makin, Elmore School of Electrical and Computer Engineering, Purdue University |
| Pseudocode | Yes | Algorithm 1: Hybrid training of EBM |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the release of source code for the methodology described. |
| Open Datasets | Yes | We demonstrate that a simple Conv Net can be trained with this method to be good at generation as well as density estimation for CIFAR-10, Oxford Flowers, and a synthetic dataset in which the learned density can be verified visually. |
| Dataset Splits | Yes | Data-initialized chains are run for L = 10, 000 steps, starting from samples from the test partition of the relevant data sets. All models are initialized at the same test-data samples to facilitate comparisons between them. |
| Hardware Specification | Yes | All models were trained for 10,000 iterations using a single V100 GPU. |
| Software Dependencies | No | The paper mentions using 'scipy' in Section A.1 but does not provide version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | For persistent and persistent+refresh initializations, we apply Langevin dynamics with a step size of ϵ = 0.05 and a temperature (see Section A.4) of T = 0.005. For data and hybrid initializations, we employ an adaptive step size: at each training iteration, the step size is set as ϵ = 0.0005/||E(x)||... All models were trained for 10,000 iterations using a single V100 GPU. |
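The Langevin dynamics quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy quadratic `energy` stands in for the learned ConvNet energy, `n_steps` is far shorter than the paper's L = 10,000, and the update rule x ← x − ε∇E(x) + √(2εT)·noise is the standard tempered-Langevin form assumed here, using the quoted ε = 0.05 and T = 0.005.

```python
import numpy as np

def energy(x):
    # Toy quadratic energy with its minimum at the origin; a stand-in for
    # the paper's learned ConvNet energy E(x).
    return 0.5 * np.sum(x ** 2)

def energy_grad(x):
    # Analytic gradient of the toy energy (a trained EBM would use autograd).
    return x

def langevin_sample(x0, n_steps=2000, eps=0.05, temperature=0.005, rng=None):
    """Tempered Langevin chain: x <- x - eps * grad E(x) + sqrt(2*eps*T) * noise.

    eps and temperature match the persistent/persistent+refresh settings
    quoted above; n_steps is kept small only to make the sketch cheap.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = x0.copy()
    noise_scale = np.sqrt(2.0 * eps * temperature)
    for _ in range(n_steps):
        x = x - eps * energy_grad(x) + noise_scale * rng.standard_normal(x.shape)
    return x

x0 = np.full(8, 5.0)            # chain initialized far from the mode
x = langevin_sample(x0)
print(energy(x) < energy(x0))   # the chain descends toward low energy
```

The paper's data/hybrid initializations instead use an adaptive step size (set each iteration from the quoted normalization ϵ = 0.0005/||E(x)||...), which would replace the fixed `eps` above with a per-iteration value.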