AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption

Authors: Joonsung Jeon, Woo Jae Kim, Suhyeon Ha, Sooel Son, Sung-eui Yoon

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type Experimental Our experimental results demonstrate that ADVPAINT's perturbations are highly effective in disrupting the adversary's inpainting tasks, outperforming existing methods; ADVPAINT attains over a 100-point increase in FID and substantial decreases in precision. The code is available at https://github.com/JoonsungJeon/AdvPaint.
Researcher Affiliation Academia Joonsung Jeon, Woo Jae Kim, Suhyeon Ha, Sooel Son & Sung-Eui Yoon, Korea Advanced Institute of Science and Technology (KAIST)
Pseudocode No The paper describes the methodology using prose and mathematical equations. It does not contain any clearly labeled pseudocode blocks or algorithms.
Open Source Code Yes The code is available at https://github.com/JoonsungJeon/AdvPaint.
Open Datasets Yes Following prior studies (Salman et al., 2023; Liang et al., 2023; Xue et al., 2024), we collected 100 images from publicly available sources (https://www.pexels.com/ and https://unsplash.com/), which were then cropped and resized to 512×512 resolution.
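The paper only states that collected images were "cropped and resized to 512×512"; the exact pipeline is not specified. A minimal sketch of one common convention (center-crop to a square, then resize), assuming Pillow is available — the function name and crop strategy here are illustrative, not the authors' code:

```python
from PIL import Image

def preprocess_image(path: str, size: int = 512) -> Image.Image:
    """Center-crop an image to a square, then resize to size x size.

    Hypothetical helper: the paper does not specify the crop strategy,
    so center-crop is an assumption.
    """
    img = Image.open(path).convert("RGB")
    w, h = img.size
    s = min(w, h)  # side length of the largest centered square
    left, top = (w - s) // 2, (h - s) // 2
    img = img.crop((left, top, left + s, top + s))
    return img.resize((size, size), Image.LANCZOS)
```

Any deterministic square crop would satisfy the paper's description; center-crop is simply the most common default in diffusion-model preprocessing pipelines.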
Dataset Splits No The paper mentions collecting 100 images for evaluation and using 25 randomly selected images for robustness testing in Section 5.6. It also mentions generating 50 random prompts. However, it does not specify explicit training, validation, or test dataset splits in the traditional sense for model development, as the paper focuses on generating perturbations for existing models rather than training a new model from scratch.
Hardware Specification Yes All experiments were conducted using a single NVIDIA GeForce RTX 3090 GPU.
Software Dependencies No The paper mentions using 'SD inpainter' (Stable Diffusion inpainting model), 'Grounded SAM', 'ChatGPT', and 'AlexNet' as components or tools in their experiments. However, it does not provide specific version numbers for any programming languages (e.g., Python), deep learning frameworks (e.g., PyTorch, TensorFlow), or other software libraries that would be required to reproduce the experimental setup.
Experiment Setup Yes We applied Projected Gradient Descent (PGD) to optimize our perturbations exclusively at timestep T, over 250 iterations, starting with an initial step size of 0.03, which progressively decreased at each step. Importantly, we set η as 0.06 for all adversarial examples, including those from prior works, to enforce consistent levels of imperceptible perturbations. In computing adversarial perturbations, we enlarged the generated bounding box mbb to m by a factor of ρ = 1.2, separating the regions for two-stage optimization.