Manifold Induced Biases for Zero-shot and Few-shot Detection of Generated Images

Authors: Jonathan Brokman, Amit Giloni, Omer Hofman, Roman Vainshtein, Hisashi Kojima, Guy Gilboa

ICLR 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Empirical results across 20 generative models demonstrate that our method outperforms current approaches in both zero-shot and few-shot settings. This work advances the theoretical understanding and practical usage of generated content biases through the lens of manifold analysis.
Researcher Affiliation | Collaboration | Technion - Israel Institute of Technology; Fujitsu Research of Europe; Fujitsu Limited
Pseudocode | No | The paper describes mathematical derivations and a pipeline (Fig. 1) but does not include a formal pseudocode block or algorithm.
Open Source Code | Yes | "To reproduce our results, see our official implementation": https://github.com/JonathanBrok/Manifold-Induced-Biases-for-Zero-shot-and-Few-shot-Detection-of-Generated-Images
Open Datasets | Yes | Our method is evaluated on a combination of three benchmark datasets featuring diverse generative techniques: CNNSpot (Wang et al., 2020) comprises real and generated images... The UniversalFakeDetect (Ojha et al., 2023) dataset extends CNNSpot with generated images... The GenImage (Zhu et al., 2023) dataset features images produced by commercial generative tools...
Dataset Splits | Yes | In total, our aggregated dataset consists of 100K real images and an additional 100K images produced by 20 different generation techniques... To construct the calibration set for the zero-shot methods, we extracted 1,000 real samples from the datasets. For the test set, we selected 200,000 samples, ensuring a representative volume from each generation technique... In all MoE experiments, an additional 1K labeled samples were used to train the light-weight classifier; these were randomly selected in an additional train-test split, implemented on the dataset initially used for zero-shot testing.
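The split procedure quoted above can be sketched as follows. This is a minimal illustration of the described protocol, not the paper's code: all variable names are hypothetical, and sizes are scaled down 100x (10 instead of 1,000 calibration samples, etc.) to keep the example fast.

```python
import random

random.seed(0)

# Scaled-down stand-ins for the aggregated dataset:
# 1,000 "real" images and 50 images from each of 20 generation techniques.
real = [("real", i) for i in range(1_000)]
fake = [("gen%02d" % g, i) for g in range(20) for i in range(50)]

# Zero-shot calibration set: a small pool of real samples only
# (the paper uses 1,000; here 10).
random.shuffle(real)
calibration = real[:10]

# Test set: the remaining real images plus all generated images,
# keeping every generation technique represented.
test = real[10:] + fake

# MoE experiments: an additional small labeled split, randomly carved
# out of the pool initially used for zero-shot testing.
random.shuffle(test)
moe_train, moe_test = test[:10], test[10:]
```

The key design point in the quoted protocol is that the calibration set contains only real images (so zero-shot methods never see generated data), while the MoE classifier gets a small labeled subset drawn from the zero-shot test pool via a further train-test split.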
Hardware Specification | Yes | All of the experiments were conducted on the Ubuntu 20.04 Linux operating system, equipped with a Standard NC48ads A100 v4 configuration, featuring 4 virtual GPUs and 440 GB of memory.
Software Dependencies | Yes | The experimental code base was developed in Python 3.8.2, utilizing PyTorch 2.1.2 and the NumPy 1.26.3 package for computational tasks.
Experiment Setup | Yes | Criterion hyper-parameters were set as follows: 1) the number of spherical noises s was set to 64; 2) perturbation strength α_d = 1.28, determining the B0 radii; and 3) a small scalar δ = 10⁻⁸ was added to the criterion denominator to ensure it is strictly positive. ... We set a = b = c = 1, though tuning is possible.
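To show how the quoted hyper-parameters could enter a perturbation-based criterion, here is a minimal sketch. The criterion form, `score_fn`, and the role of the weights a, b, c are hypothetical stand-ins (the paper's actual criterion comes from its manifold derivation); only the constants s = 64, α = 1.28, δ = 10⁻⁸, and a = b = c = 1 are taken from the text.

```python
import numpy as np

S = 64            # number of spherical noises s (from the paper)
ALPHA = 1.28      # perturbation strength alpha_d, setting the B0 radius
DELTA = 1e-8      # small scalar keeping the denominator strictly positive
A = B = C = 1.0   # criterion weights a = b = c = 1, untuned as stated

def spherical_noise(shape, rng):
    """Draw a noise vector uniformly on the sphere of radius ALPHA."""
    v = rng.standard_normal(shape)
    return ALPHA * v / np.linalg.norm(v)

def criterion(x, score_fn, rng=None):
    """Hypothetical criterion: aggregate score responses over S
    spherical perturbations of x, normalized by the unperturbed score."""
    rng = rng or np.random.default_rng(0)
    responses = np.array(
        [score_fn(x + spherical_noise(x.shape, rng)) for _ in range(S)]
    )
    numerator = A * responses.mean() + B * responses.std()
    denominator = C * np.abs(score_fn(x)) + DELTA  # DELTA guards against zero
    return numerator / denominator
```

The point of δ is visible in the last line: without it, an unperturbed score of exactly zero would make the criterion undefined, so a tiny additive constant keeps the ratio well-defined without materially changing non-degenerate values.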