Fooling Contrastive Language-Image Pre-Trained Models with CLIPMasterPrints
Authors: Matthias Freiberger, Peter Kun, Christian Igel, Anders Sundnes Løvlie, Sebastian Risi
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate how fooling master images for CLIP (CLIPMasterPrints) can be mined using stochastic gradient descent, projected gradient descent, or black-box optimization. We investigate the properties of the mined images, and find that images trained on a small number of image captions generalize to a much larger number of semantically related captions. We evaluate possible mitigation strategies, where we increase the robustness of the model and introduce an approach to automatically detect CLIPMasterPrints to sanitize the input of vulnerable models. |
| Researcher Affiliation | Academia | Matthias Freiberger EMAIL University of Copenhagen Peter Kun EMAIL IT University of Copenhagen Christian Igel EMAIL University of Copenhagen Anders Sundnes Løvlie EMAIL IT University of Copenhagen Sebastian Risi EMAIL IT University of Copenhagen |
| Pseudocode | Yes | A.5 Pseudocode for black-box mining of CLIPMasterPrints Algorithm 1 illustrates our black-box approach to mining CLIPMasterPrints as a pseudocode listing. |
| Open Source Code | Yes | We supply our code with instructions on how to reproduce our experiments as supplementary material. The code is also available at https://github.com/matfrei/CLIPMasterPrints. |
| Open Datasets | Yes | We test our approach to finding master images for both fooling CLIP on famous artworks and on ImageNet (Russakovsky et al., 2015) classes. |
| Dataset Splits | Yes | We create train, validation, and test sets of 60,000, 10,000, and 10,000 images respectively, each from a subset of the ImageNet train set. ... in all subsets (train, validation, and test) 50% of all images are mined CLIPMasterPrints, while the remaining images are the templates used to initialize the mining process, i.e. randomly chosen images from the ImageNet train and validation sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. It mentions using different CLIP models but not the hardware they were run on. |
| Software Dependencies | No | The paper mentions optimizers like Adam (Kingma & Ba, 2015) and algorithms like CMA-ES (Hansen & Ostermeier, 2001) but does not provide specific version numbers for any software libraries, programming languages, or tools used in the experiments. |
| Experiment Setup | Yes | SGD is applied to a single randomly initialized image and optimized for 1000 iterations using Adam (Kingma & Ba, 2015) (β1 = 0.9, β2 = 0.999, ϵ = 10⁻⁸) at a learning rate of 0.1. ... In our black-box approach, we search the latent space of the Stable Diffusion VAE (Rombach et al., 2022) for CLIPMasterPrints using CMA-ES for 18000 iterations. ... We initialize CMA-ES with a random vector sampled from a zero-mean unit-variance Gaussian distribution and choose σ = 1 as initial sampling variance. We follow the heuristic suggested by Hansen (2016) and sample 4 + 3 log(d) = 4 + 3 log(7056) ≈ 31 candidates per iteration. ... Finally, for our PGD approach, we start from an existing image and again optimize for 1000 iterations using a step size of α = 1 and a maximal adversarial perturbation of ϵ = 15. |
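To illustrate the gradient-based mining idea quoted above (optimizing a single input so that it scores highly against many target captions at once), here is a minimal NumPy sketch that performs gradient ascent on the mean cosine similarity between one vector and a set of caption embeddings. The function name, the mean-cosine objective, and all hyperparameters are illustrative stand-ins, not the paper's actual CLIP-based implementation:

```python
import numpy as np

def mine_master_vector(caption_embs, steps=1000, lr=0.1, rng=None):
    """Sketch of fooling-master-image mining: gradient ascent on the mean
    cosine similarity between one optimized vector and several caption
    embeddings (a stand-in for CLIP's image/text encoders)."""
    rng = np.random.default_rng(rng)
    # Normalize caption embeddings to unit length, as cosine similarity requires.
    C = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    x = rng.normal(size=C.shape[1])  # random initialization, as in the paper's SGD variant
    for _ in range(steps):
        norm = np.linalg.norm(x)
        xn = x / norm
        cos = C @ xn  # per-caption cosine similarity
        # Analytic gradient of mean_i cos(x, c_i) with respect to x.
        grad = (C - np.outer(cos, xn)).mean(axis=0) / norm
        x = x + lr * grad
    return x / np.linalg.norm(x)
```

In the paper the optimized variable is an image (or a VAE latent) pushed through CLIP's image encoder rather than a raw embedding vector, but the objective, maximizing similarity to a batch of target captions simultaneously, has the same shape.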
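The CMA-ES population size reported in the setup row follows Hansen's default heuristic; the arithmetic can be checked directly (the rounding convention is an assumption — the paper reports "≈ 31"):

```python
import math

d = 7056  # dimensionality of the Stable Diffusion VAE latent being searched
lam = 4 + 3 * math.log(d)  # Hansen's default population-size heuristic, 4 + 3 ln(d)
print(round(lam))  # ≈ 31 candidates sampled per CMA-ES iteration
```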
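For the PGD variant, the setup row gives α = 1 and ϵ = 15. A single projected-gradient step under those values might look as follows; the sign-of-gradient step and the L∞ projection ball are the standard PGD formulation (Madry et al.) and an assumption here, not confirmed details of the paper's implementation:

```python
import numpy as np

def pgd_step(x, x0, grad, alpha=1.0, eps=15.0):
    """One projected-gradient-ascent step: move along the gradient sign,
    then project back into the L-inf ball of radius eps around the
    template image x0, and finally into the valid pixel range [0, 255]."""
    x = x + alpha * np.sign(grad)
    x = np.clip(x, x0 - eps, x0 + eps)  # enforce maximal perturbation eps
    return np.clip(x, 0.0, 255.0)       # keep pixels in valid range
```

Starting "from an existing image" corresponds to initializing `x = x0` and iterating this step 1000 times, so the mined image never deviates from the template by more than ϵ per pixel.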