Anti-Diffusion: Preventing Abuse of Modifications of Diffusion-Based Models

Authors: Li Zheng, Liangbin Xie, Jiantao Zhou, Xintao Wang, Haiwei Wu, Jinyu Tian

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Experiments demonstrate that our Anti-Diffusion achieves superior defense performance across a wide range of diffusion-based techniques in different scenarios. Based on both quantitative and qualitative results, our proposed method, Anti-Diffusion, achieves superior defense effects across several diffusion-based techniques, including tuning methods (such as DreamBooth/LoRA) and editing methods (such as MasaCtrl/DiffEdit)."
Researcher Affiliation: Collaboration. 1. University of Macau; 2. Shenzhen Institute of Advanced Technology; 3. Kuaishou Technology; 4. Macau University of Science and Technology.
Pseudocode: No. The paper describes the overall framework and methodology, including the problem definition, prompt tuning strategy, adversarial noise optimization, and UNet update. These are explained using text and a diagram (Figure 2), but there is no explicitly labeled 'Pseudocode' or 'Algorithm' block with structured steps.
Open Source Code: Yes. Code: https://github.com/whulizheng/Anti-Diffusion
Open Datasets: Yes. "To better evaluate the effectiveness of current defense methods against diffusion-based editing methods, in this work, we further construct a dataset, named Defense-Edit. We hope this dataset can draw attention to the privacy protection challenges posed by diffusion-based image editing models. ... We contribute a dataset called Defense-Edit for evaluating the defense performance against editing-based methods. ... Specifically, we conduct experiments using the 100 unique identifiers (IDs) gathered from the VGGFace2 (Cao et al. 2018) and CelebA-HQ (Karras et al. 2017) datasets."
Dataset Splits: No. The paper mentions using 100 unique identifiers from the VGGFace2 and CelebA-HQ datasets and generating 16 images under 5 different seeds for evaluation. It also states that it follows "the dataset usage of the Anti-DB" for training DreamBooth/LoRA models, but it does not specify explicit training/validation/test splits, percentages, or sample counts for the experiments conducted in this paper.
Hardware Specification: No. The paper does not provide any specific hardware details, such as GPU models, CPU models, or memory specifications, used for running the experiments.
Software Dependencies: No. The paper does not list any specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) required to replicate the experiments.
Experiment Setup: Yes. "To ensure a fair comparison, following Anti-DB, we adopt the noise budget of η = 0.05 for all these methods. During the evaluation process, for each trained DreamBooth/LoRA model, we generate 16 images under 5 different seeds, totaling 80 images, to evaluate the corresponding results, thereby eliminating the variability associated with a single seed."
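The adversarial noise optimization mentioned in the Pseudocode row is described in the paper only in prose, constrained by the η = 0.05 noise budget quoted in the Experiment Setup row. As a hedged illustration only (not the paper's actual algorithm), a generic sign-gradient ascent under an L-infinity budget can be sketched as below; `pgd_perturb` and `grad_fn` are hypothetical names, and a real defense would compute the gradient through the diffusion model's loss rather than an abstract callable:

```python
import numpy as np

def pgd_perturb(image, grad_fn, eta=0.05, alpha=0.01, steps=10):
    """Illustrative sign-gradient ascent within an L-infinity ball of radius eta.

    image   : float array with pixel values in [0, 1]
    grad_fn : callable returning d(loss)/d(x) for the loss being maximized
              (left abstract here; Anti-Diffusion would differentiate
              through the diffusion model, which this sketch does not do)
    """
    delta = np.zeros_like(image)
    for _ in range(steps):
        # step in the direction that increases the loss
        delta += alpha * np.sign(grad_fn(image + delta))
        # project the perturbation back into the eta-ball
        delta = np.clip(delta, -eta, eta)
    # keep the protected image in the valid pixel range
    return np.clip(image + delta, 0.0, 1.0)
```

The final clip to [-η, η] is what enforces the budget the paper adopts for fair comparison across methods.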
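The evaluation protocol quoted above (16 images under each of 5 seeds, i.e. 80 images per trained model) can be made concrete with a small bookkeeping sketch. The function name and model identifier below are hypothetical; in practice each job would drive a trained DreamBooth/LoRA pipeline with the given seed:

```python
from itertools import product

def evaluation_jobs(model_id, seeds=(0, 1, 2, 3, 4), images_per_seed=16):
    """Enumerate every (model, seed, image index) generation job for one
    trained model: 16 images under each of 5 seeds, 80 images in total."""
    return [(model_id, seed, idx)
            for seed, idx in product(seeds, range(images_per_seed))]
```

Averaging metrics over all 80 jobs, rather than a single seeded run, is what the paper credits with "eliminating the variability associated with a single seed."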