Synthetic Data from Diffusion Models Improves ImageNet Classification
Authors: Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, David J. Fleet
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | augmenting the ImageNet training set with samples from a generative diffusion model can yield substantial improvements in ImageNet classification accuracy over strong ResNet and Vision Transformer baselines. To this end we explore the fine-tuning of large-scale text-to-image diffusion models, yielding class-conditional ImageNet models with state-of-the-art FID score (1.76 at 256×256 resolution) and Inception Score (239 at 256×256). The model also yields a new state-of-the-art in Classification Accuracy Scores, i.e., ImageNet test accuracy for a ResNet-50 architecture trained solely on synthetic data (64.96 top-1 accuracy for 256×256 samples, improving to 69.24 for 1024×1024 samples). |
| Researcher Affiliation | Industry | Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, David J. Fleet — Google DeepMind, EMAIL, davidfleet@google.com |
| Pseudocode | No | The paper describes the generative model training and sampling procedures, as well as the experimental methods, in prose without using structured pseudocode or algorithm blocks. |
| Open Source Code | No | Like the original Imagen model, our fine-tuned variant is not publicly available, in part to protect against the generation of harmful content. |
| Open Datasets | Yes | The ImageNet ILSVRC 2012 dataset (ImageNet-1K) comprises 1.28 million labeled training images and 50K validation images spanning 1000 categories (Russakovsky et al., 2015)... Imagen was trained on a mixture of datasets, 30% of which comprised Laion-400M (Schuhmann et al., 2021). |
| Dataset Splits | Yes | The ImageNet ILSVRC 2012 dataset (ImageNet-1K) comprises 1.28 million labeled training images and 50K validation images spanning 1000 categories (Russakovsky et al., 2015)... For CAS training and evaluation, we resize images to 256×256 (or, for real images, to 256 pixels on the shorter side) and then take a 224×224 pixel center crop. |
| Hardware Specification | Yes | The 64×64 base model is fine-tuned for 210K steps and the 64×64 → 256×256 super-resolution model is fine-tuned for 490K steps, on 256 TPU-v4 chips with a batch size of 2048. |
| Software Dependencies | No | The paper mentions optimizers like Adafactor (Shazeer & Stern, 2018) and Adam (Kingma & Ba, 2014), and samplers like DDPM (Ho et al., 2020) and DDIM (Song et al., 2021). However, it does not provide specific version numbers for any programming languages, libraries, or other ancillary software components used for the experiments. |
| Experiment Setup | Yes | The 64×64 base model is fine-tuned for 210K steps and the 64×64 → 256×256 super-resolution model is fine-tuned for 490K steps, on 256 TPU-v4 chips with a batch size of 2048... Models are trained for 90 epochs with a batch size of 1024 using SGD with momentum (see Appendix A.4 for details)... we selected guidance of 1.25 when sampling from the base model, and 1.0 for other resolutions. We use DDPM sampler (Ho et al., 2020) log-variance mixing coefficients of 0.0 for 64×64 samples, and 0.1 for 256×256 samples, with 1000 denoising steps. |
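The sampling configuration quoted above (classifier-free guidance, a DDPM ancestral sampler with a log-variance mixing coefficient, 1000 denoising steps) can be sketched as follows. This is a minimal illustration under standard DDPM assumptions, not the paper's implementation: `toy_eps_model`, the linear beta schedule, and all parameter names are hypothetical stand-ins; only the guidance weight (1.25), mixing coefficients (0.0 / 0.1), and step count (1000) come from the quoted setup.

```python
import numpy as np

T = 1000                              # denoising steps (per the quoted setup)
betas = np.linspace(1e-4, 0.02, T)    # linear schedule (assumption, Ho et al. 2020 style)
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)

def toy_eps_model(x, t, cond):
    # Stand-in noise predictor; in the paper this is the fine-tuned Imagen network.
    return 0.1 * x if cond is not None else 0.05 * x

def guided_eps(x, t, cond, w):
    # Classifier-free guidance: eps = (1 + w) * eps_cond - w * eps_uncond
    return (1.0 + w) * toy_eps_model(x, t, cond) - w * toy_eps_model(x, t, None)

def ddpm_sample(shape, cond, w=1.25, logvar_coeff=0.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)    # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = guided_eps(x, t, cond, w)
        ab_t = alphas_bar[t]
        ab_prev = alphas_bar[t - 1] if t > 0 else 1.0
        # DDPM posterior mean given the predicted noise
        mean = (x - betas[t] / np.sqrt(1.0 - ab_t) * eps) / np.sqrt(alphas[t])
        if t > 0:
            # Mix log-variance between the lower bound beta_tilde_t and the
            # upper bound beta_t; logvar_coeff = 0.0 picks beta_tilde_t.
            beta_tilde = (1.0 - ab_prev) / (1.0 - ab_t) * betas[t]
            logvar = (logvar_coeff * np.log(betas[t])
                      + (1.0 - logvar_coeff) * np.log(beta_tilde))
            x = mean + np.exp(0.5 * logvar) * rng.standard_normal(shape)
        else:
            x = mean                  # no noise added at the final step
    return x

sample = ddpm_sample((4,), cond="class_7", w=1.25, logvar_coeff=0.0)
print(sample.shape)
```

With the paper's settings, the base 64×64 model would use `w=1.25, logvar_coeff=0.0` and the 256×256 super-resolution stage `w=1.0, logvar_coeff=0.1`; the mixing coefficient interpolates in log space between the posterior's lower-bound variance and beta_t, following Nichol & Dhariwal's learned-variance parameterization.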