ConceptPrune: Concept Editing in Diffusion Models via Skilled Neuron Pruning
Authors: Ruchika Chavhan, Da Li, Timothy Hospedales
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments across a range of concepts, including artistic styles, nudity, and objects, demonstrate that target concepts can be efficiently erased by pruning a tiny fraction (approximately 0.12%) of total weights, enabling multi-concept erasure and robustness against various white-box and black-box adversarial attacks. |
| Researcher Affiliation | Collaboration | 1University of Edinburgh, 2Samsung AI Center, Cambridge |
| Pseudocode | No | The paper describes methods using natural language and equations, but does not include any explicitly labeled pseudocode or algorithm blocks with structured steps. |
| Open Source Code | Yes | Code available at https://github.com/ruchikachavhan/concept-prune |
| Open Datasets | Yes | The risks associated with large-scale text-to-image models arise from billion-sized web-scraped datasets used in training, comprising public datasets like LAION [Schuhmann et al., 2022], COYO [Byeon et al., 2022], and CC12M [Changpinyo et al., 2021]... We use the Inappropriate Prompts Dataset (I2P) [Schramowski et al., 2023]... We conducted experiments targeting ImageNette classes [Howard & Gugger, 2020], a subset of ImageNet [Deng et al., 2009]... |
| Dataset Splits | No | The paper uses various datasets for evaluation (e.g., I2P, COCO30k, ImageNette classes, Winobias, and a custom set of 50 prompts per artist), but it does not specify any training/test/validation splits for these datasets. The method is training-free and applies to pre-trained models, with the mentioned datasets used for evaluation purposes rather than for training or splitting in the traditional sense. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, such as programming languages, libraries, or frameworks used in the implementation. |
| Experiment Setup | Yes | To calculate neuron activations, we run the model for 50 denoising iterations and fix the seed before every forward pass to ensure the same initializations for both reference and target concept prompts... As discussed in Section 4.1, we select two key hyperparameters: the sparsity level k% and t̂ for aggregating skilled neurons across time steps. For each concept, we vary the sparsity parameter k% between 0.5% and 5%, choosing the value that achieves the best trade-off between concept erasure and the retention of unrelated concepts. More details on this hyperparameter selection process can be found in Section A.3 of the appendix. The optimal sparsity levels k% and the corresponding t̂ values for each concept are outlined in Table 10 in the appendix. |
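The setup row above describes the core procedure: collect neuron activations for target and reference prompts (with a fixed seed across denoising steps), score which neurons are "skilled" at the target concept, and prune the top k%. The sketch below is a minimal, simplified illustration of that idea, not the authors' implementation; the activation aggregation over the first t̂ timesteps and the exact skill score are abstracted away, and all function names (`skilled_neuron_mask`, `prune_weights`) are ours:

```python
import numpy as np

def skilled_neuron_mask(target_acts, reference_acts, k_percent):
    """Mark the top-k% 'skilled' neurons: those whose mean activation
    on target-concept prompts most exceeds that on reference prompts.

    target_acts, reference_acts: arrays of shape (n_prompts, n_neurons),
    assumed already aggregated over the relevant denoising timesteps.
    """
    # Score each neuron by how much more it fires for the target concept.
    score = target_acts.mean(axis=0) - reference_acts.mean(axis=0)
    n_prune = max(1, int(round(score.size * k_percent / 100.0)))
    top_idx = np.argsort(score)[-n_prune:]  # highest-scoring neurons
    mask = np.zeros(score.size, dtype=bool)
    mask[top_idx] = True
    return mask

def prune_weights(weight, mask):
    """Zero out the weight rows of the skilled neurons, erasing their
    contribution to the layer's output (training-free pruning)."""
    pruned = weight.copy()
    pruned[mask, :] = 0.0
    return pruned
```

With k% in the paper's 0.5%–5% range, only a small slice of each layer's rows is zeroed, which is consistent with the reported ~0.12% of total model weights being pruned overall.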