Preventing Latent Diffusion Model-Based Image Mimicry via Angle Shifting and Ensemble Learning

Authors: Minghao Li, Rui Wang, Ming Sun, Lihua Jing

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that the alternating iterative framework and the stable optimization strategy on cosine similarity loss are more efficient and more effective." "We evaluate our methods on two datasets." (Section 5, Experiments; Section 5.1, Setup)
Researcher Affiliation | Academia | Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Pseudocode | No | The paper describes its methods and a pipeline diagram (Figure 4) but contains no clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Code is available at https://github.com/MinghaoLi01/cosattack.
Open Datasets | Yes | "We evaluate our methods on two datasets. Considering that infringement issues mainly occur on human faces and artworks, we use a subset of CelebA-HQ [Karras et al., 2018] and a subset of WikiArt [Nichol, 2016], respectively."
Dataset Splits | No | "We randomly select 500 face images from CelebA-HQ. The WikiArt dataset contains artworks from 27 different styles. We randomly selected 20 images from each style of artworks." The text describes how evaluation images were selected but does not specify standard training/validation/test splits for model training.
Hardware Specification | Yes | "We conduct experiments on NVIDIA GeForce GTX 1080Ti with 12G VRAM."
Software Dependencies | No | The paper mentions tools such as SDEdit, SD-v1-4, and DDIM100, but it does not provide version numbers for the software dependencies used in the implementation (e.g., Python or PyTorch versions).
Experiment Setup | Yes | "Following the existing research, we use the ℓ∞-norm to constrain the generated adversarial examples, with the constraint range 8/255 and the step size α = 1/255. To facilitate the exploration of the impact of the grouping strategy, we set the number of iterations K = 100 for all the methods. For the grouping strategy, we set N = 20, M = 5."
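The quoted setup (ℓ∞ bound ε = 8/255, step size α = 1/255, K = 100 iterations) matches a standard PGD-style sign-gradient loop. A minimal sketch under that assumption, with a generic `grad_fn` oracle standing in for the gradient of the paper's cosine-similarity loss (the function name and NumPy framing are illustrative, not from the paper):

```python
import numpy as np

# Hyperparameters quoted in the paper's experiment setup.
EPS = 8 / 255    # ell_inf constraint range
ALPHA = 1 / 255  # per-step size
K = 100          # number of iterations

def linf_attack(x, grad_fn, eps=EPS, alpha=ALPHA, steps=K):
    """Generic ell_inf-constrained iterative (PGD-style) perturbation.

    grad_fn(x_adv) is a placeholder for the gradient of the attack loss
    with respect to the input; any differentiable loss fits here.
    """
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        # Sign-gradient step, then project back into the eps-ball
        # around the clean image and into the valid pixel range [0, 1].
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Toy check: a gradient that always pushes pixels upward saturates
# the perturbation at the eps boundary.
x = np.full((4, 4), 0.5)
adv = linf_attack(x, grad_fn=lambda z: np.ones_like(z))
print(np.max(np.abs(adv - x)))  # bounded by eps = 8/255
```

With α = 1/255 the ε-ball boundary is reached after 8 steps; the remaining iterations only refine the direction within the projected region, which is why K = 100 is affordable on a single 1080Ti.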