Dynamic Negative Guidance of Diffusion Models
Authors: Felix Koulischer, Johannes Deleu, Gabriel Raya, Thomas Demeester, Luca Ambrogioni
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of DNG class-removal on MNIST and CIFAR10, where we show that DNG leads to higher safety, preservation of class balance and image quality when compared with baseline methods. Furthermore, we show that it is possible to use DNG with Stable Diffusion to obtain more accurate and less invasive guidance than NP. |
| Researcher Affiliation | Academia | ¹ IDLab, Ghent University; ² JADS, Tilburg University; ³ Donders Institute for Brain, Cognition and Behaviour, Radboud University |
| Pseudocode | Yes | Algorithm 1: Dynamic Negative Guidance; Algorithm 2: Compute posterior |
| Open Source Code | Yes | Our implementation is available at https://github.com/FelixKoulischer/Dynamic-Negative-Guidance.git |
| Open Datasets | Yes | The proposed algorithm is tested in the context of image generation on labelled datasets, in the present case MNIST and CIFAR10 are considered. [...] For CIFAR10 the pretrained model from Ho et al. (2020) is used. |
| Dataset Splits | No | No explicit dataset split information (percentages, sample counts, or methodology) for training, validation, or testing is provided beyond mentioning the use of training data for FID and evaluating on generated images for safety metrics. |
| Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., specific GPU/CPU models, memory) used for conducting its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | Hyperparameter values for our Negative Guidance scheme for the different datasets are given in Table 1. A discussion explaining the choice of the different hyperparameters of our scheme is included in D.1. For Safe Latent Diffusion (Schramowski et al., 2023) a hyperparameter search was performed to obtain the values that perform best at high safety. The most important hyperparameter is the threshold value at which guidance is activated. |
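The Pseudocode row references the paper's two routines, Algorithm 1 (Dynamic Negative Guidance) and Algorithm 2 (posterior computation). The core idea they implement is that the negative-guidance strength is not fixed but rescaled at each denoising step by an estimated posterior probability that the current sample belongs to the unwanted class. A minimal sketch of that idea follows; the function names, the error-based likelihood proxy, and the linear scaling are illustrative assumptions, not a reproduction of the paper's exact algorithms.

```python
import numpy as np

def dng_step(eps_uncond, eps_neg, posterior, guidance_scale=5.0):
    """One dynamically scaled negative-guidance update (sketch).

    The fixed negative-prompting scale is replaced by
    guidance_scale * posterior, so the sample is pushed away from the
    negative concept only when it is likely to contain it.
    Names and scaling rule are illustrative, not the paper's notation.
    """
    lam = guidance_scale * posterior  # dynamic scale in [0, guidance_scale]
    return eps_uncond - lam * (eps_neg - eps_uncond)

def update_posterior(prior, err_neg, err_uncond, temperature=1.0):
    """Crude Bayesian update of p(negative | x_t) from denoising errors.

    A lower prediction error under the negative-conditioned model is
    treated as evidence for the negative class; this error-based
    likelihood proxy is an assumption standing in for the paper's
    likelihood term.
    """
    num = prior * np.exp(-err_neg / temperature)
    den = num + (1.0 - prior) * np.exp(-err_uncond / temperature)
    return num / den
```

With `posterior = 0` the update reduces to the unconditional prediction, i.e. guidance is inactive on samples judged safe, which is the behavior the paper contrasts with the always-on scale of standard negative prompting.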