Lightning-Fast Image Inversion and Editing for Text-to-Image Diffusion Models

Authors: Dvir Samuel, Barak Meiri, Haggai Maron, Yoad Tewel, Nir Darshan, Shai Avidan, Gal Chechik, Rami Ben-Ari

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct a comprehensive evaluation of GNRI. First, we directly assess the quality of inversions found with GNRI by measuring reconstruction errors compared to deterministic inversion approaches. Our method surpasses all baselines with a 2x to 40x speedup. We then demonstrate the benefit of GNRI in two downstream tasks: (1) in image editing, GNRI smoothly changes fine details in the image in a consistent and coherent way, whereas previous methods struggle to do so; (2) in seed interpolation and rare-concept generation (Samuel et al., 2023), both of which require diffusion inversion, GNRI yields more accurate seeds, resulting in superior generated images, both qualitatively and quantitatively.
Researcher Affiliation | Collaboration | Dvir Samuel (Origin AI, Bar-Ilan University), Barak Meiri (Origin AI, Tel-Aviv University), Haggai Maron (Technion, NVIDIA Research), Yoad Tewel (Tel-Aviv University, NVIDIA Research), Nir Darshan (Origin AI), Shai Avidan (Tel-Aviv University), Gal Chechik (Bar-Ilan University, NVIDIA Research), Rami Ben-Ari (Origin AI)
Pseudocode | No | The paper describes mathematical formulations and iterative schemes (e.g., Eq. 6, Eq. 10) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the described methodology, nor does it provide a direct link to a code repository.
Open Datasets | Yes | We test the reconstruction of 5,000 image-caption pairs from COCO (Lin et al., 2014), presenting three metrics in Fig. 3. Following Wu et al. (2024), we also evaluated our approach on the newly introduced PIE-Bench (Ju et al., 2023) dataset.
Dataset Splits | Yes | Specifically, we used the entire set of 5,000 images from the MS-COCO-2017 validation dataset (Lin et al., 2014), along with their corresponding captions.
Hardware Specification | Yes | Our solution, Guided Newton-Raphson Inversion, inverts an image within 0.4 sec (on an A100 GPU) for few-step models (SDXL-Turbo and Flux.1), opening the door to interactive image editing. All methods were tested on a single A100 GPU for a fair comparison. We compared the memory usage of our approach with AIDI (Pan et al., 2023) and Exact DPM (Zhang et al., 2023) on an A100 GPU (40 GB VRAM) and an RTX 3090 Ti GPU (24 GB VRAM).
Software Dependencies | No | The paper states that "PyTorch's built-in gradient calculation was used for computing derivatives of Eq. (9)," which mentions the tool without a specific version number.
Experiment Setup | Yes | Sampling steps were set to 50 for SD2.1 and 4 for SDXL-Turbo and Flux.1. Here, λ > 0 is a hyperparameter weighting the guidance term; see Appendix E for an ablation of λ values. We observe that... setting λ = 0.1 achieves the highest reconstruction accuracy.
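Since the paper provides no pseudocode, the core idea can be illustrated with a hedged sketch: Newton-Raphson root-finding recasts inversion as solving r(z) = f(z) - y = 0, where f stands in for one diffusion sampling step and y is the observed latent. The toy 1-D map `f`, the helper `newton_invert`, and the finite-difference derivative below are all illustrative assumptions, not the paper's actual formulation (which operates on high-dimensional latents and uses PyTorch autograd for derivatives).

```python
import math

def newton_invert(f, y, z0, tol=1e-10, max_iter=20):
    """Find z with f(z) = y via Newton-Raphson on the residual r(z) = f(z) - y.

    Hypothetical helper: the derivative is approximated by central finite
    differences here; the paper instead uses autograd on the real sampler.
    """
    z = z0
    for _ in range(max_iter):
        r = f(z) - y
        if abs(r) < tol:
            break
        h = 1e-6
        dr = (f(z + h) - f(z - h)) / (2 * h)  # r'(z) = f'(z)
        z = z - r / dr  # Newton-Raphson update
    return z

# Toy smooth, monotone map standing in for one denoising step
# (purely illustrative; not the diffusion model's actual step).
f = lambda z: z - 0.1 * math.sin(z)
z_true = 1.3
y = f(z_true)
z_hat = newton_invert(f, y, z0=y)  # initialize at the observed latent
```

Because Newton-Raphson converges quadratically near the root, a handful of iterations recovers the pre-image to high precision, which is consistent with the paper's claim of fast inversion for few-step models.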
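The role of the guidance weight λ can be seen in a toy quadratic analogue: minimizing a data-fidelity term plus λ times a prior term. The function, variable names, and closed form below are assumptions for illustration only; the paper's actual guidance term (Eq. 10) acts on diffusion latents and has no such closed form.

```python
def guided_solution(y, mu, lam):
    """Minimizer of (z - y)**2 + lam * (z - mu)**2: a toy quadratic
    analogue of a guidance-weighted inversion objective.

    Setting the derivative 2*(z - y) + 2*lam*(z - mu) to zero gives the
    closed form below. Small lam (e.g. 0.1) keeps the solution close to
    the data term y while nudging it toward the prior mean mu.
    """
    return (y + lam * mu) / (1 + lam)

z_star = guided_solution(y=1.0, mu=0.0, lam=0.1)
```

With λ = 0.1 (the value the paper reports as best for reconstruction accuracy), the solution stays dominated by the reconstruction term, which matches the intuition that guidance should regularize without overriding fidelity.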