Linear combinations of latents in generative models: subspaces and beyond

Authors: Erik Bodin, Alexandru Stere, Dragos Margineantu, Carl Ek, Henry Moss

ICLR 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "We now assess our proposed transformation scheme LOL experimentally. To verify that LOL matches or exceeds current methods for Gaussian latents for the currently available operations (interpolation and centroid determination), we perform qualitative and quantitative comparisons to their respective baselines. We then demonstrate new capabilities with several examples of low-dimensional subspaces on popular diffusion models and a popular flow matching model."
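The operations being assessed (interpolation and centroid determination) combine Gaussian latents linearly. A quick numeric check, not taken from the paper and using illustrative variable names, shows the well-known issue that motivates such transformation schemes: a naive midpoint of two high-dimensional standard-Gaussian latents has expected norm near sqrt(d/2), well inside the typical shell of radius roughly sqrt(d) where the generative model expects its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4 * 64 * 64  # illustrative latent dimensionality

z1, z2 = rng.standard_normal(d), rng.standard_normal(d)
midpoint = 0.5 * (z1 + z2)  # naive linear interpolation at t = 0.5

# Standard-Gaussian samples concentrate near norm sqrt(d);
# the naive midpoint concentrates near sqrt(d/2) ~ 0.707 * sqrt(d).
print(np.linalg.norm(z1) / np.sqrt(d))        # ~1.0
print(np.linalg.norm(midpoint) / np.sqrt(d))  # ~0.707

# One common correction (a generic baseline, not the paper's LOL scheme)
# rescales the combination back onto the typical shell.
corrected = midpoint * np.sqrt(d) / np.linalg.norm(midpoint)
print(np.linalg.norm(corrected) / np.sqrt(d))  # ~1.0
```

The same shrinkage affects centroids of more than two latents, which is why dedicated transformation schemes are compared against such rescaling baselines.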
Researcher Affiliation | Collaboration | 1 University of Cambridge, 2 Lancaster University, 3 Karolinska Institutet, 4 Boeing Commercial Airplanes, 5 Boeing AI
Pseudocode | No | The paper describes its methods using mathematical equations and textual descriptions but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Implementation and examples: https://github.com/bodin-e/linear-combinations-of-latents
Open Datasets | Yes | "We closely follow the evaluation protocol in Samuel et al. (2023), basing the experiments on Stable Diffusion (SD) 2.1 (Rombach et al., 2022) and inversions of random images from 50 random classes from ImageNet1k (Deng et al., 2009)."
Dataset Splits | No | The paper describes how images are selected from ImageNet1k for evaluation (e.g., 50 random classes, 50 unique images per class, paired for interpolation and grouped for centroids), but it does not specify traditional train/validation/test splits, as the methods are evaluated on pre-trained generative models.
Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models or the computing environment used to run its experiments.
Software Dependencies | No | The paper mentions tools and models such as the pytorch-fid library (Seitzer, 2020) and the MaxViT image classification model (Tu et al., 2022), but does not provide version numbers for these or for other software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | "We used the guidance scale 1.0 (i.e. no guidance) for the inversion, which was then matched during generation. We used a sufficient number of steps for the inversion (we used 400 steps), which we then matched for the generation. For the NAO method, ... we used 11 interpolation points and selected three points from these at uniform (index) distance. For the classification accuracy, we used a pre-trained classifier, the MaxViT image classification model (Tu et al., 2022)."
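The NAO detail quoted above (11 interpolation points, three selected at uniform index distance) presumably amounts to picking evenly spaced indices along the interpolation path; a minimal sketch of that reading (the index choice is our interpretation, not stated verbatim):

```python
import numpy as np

n_points = 11   # interpolation points along the path
n_selected = 3  # points kept for evaluation

# Evenly spaced indices over 0..n_points-1: 0, 5, 10 for 11 points.
indices = np.linspace(0, n_points - 1, n_selected).astype(int)
print(indices.tolist())  # [0, 5, 10]
```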