Inverse Problem Sampling in Latent Space Using Sequential Monte Carlo

Authors: Idan Achituve, Hai Victor Habi, Amir Rosenfeld, Arnon Netzer, Idit Diamant, Ethan Fetaya

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluations on ImageNet and FFHQ show the benefits of LD-SMC over competing methods on various inverse problem tasks, especially on challenging inpainting tasks.
Researcher Affiliation | Collaboration | ¹Sony Semiconductor Israel (SSI), Israel; ²Faculty of Engineering, Bar-Ilan University, Israel. Correspondence to: Idan Achituve <EMAIL>.
Pseudocode | Yes | Algorithm 1 LD-SMC
Open Source Code | No | The paper does not contain an explicit statement about releasing its source code or a link to a code repository.
Open Datasets | Yes | We evaluated LD-SMC on ImageNet (Russakovsky et al., 2015) and FFHQ (Karras et al., 2019); both are common in the literature of inverse problems.
Dataset Splits | Yes | We sampled 1024 random images from the validation set of each dataset, which were used to evaluate all methods.
Hardware Specification | Yes | The experiments were carried out mainly on NVIDIA A100 GPUs with 40GB and 80GB of memory.
Software Dependencies | No | The paper mentions specific models and samplers (DDIM, VQ-4 / CIN256-V2) but does not provide version numbers for software dependencies (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | The guidance scale was fixed to 1.0 in all experiments. For all methods, a hyperparameter search was performed over η ∈ {0.05, 0.5, 1.0}; LD-SMC worked best with η = 1.0. For LD-SMC, a grid search was also performed over κ² ∈ {0.5, 1.5, 2.5}, s ∈ {0, 100, 200, 333}, and ρ ∈ {0.5, 0.75}. Table 3 of the paper lists the LD-SMC hyperparameters for all tasks.
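The paper's Algorithm 1 (LD-SMC) is not reproduced in this report. As a generic illustration of the sequential Monte Carlo machinery the method builds on, one reweight-and-resample step might look like the sketch below; the function names and the choice of systematic resampling are assumptions, not details from the paper.

```python
import math
import random

def systematic_resample(weights, rng):
    """Systematic resampling: map normalized particle weights to
    ancestor indices with low variance (a standard SMC building block).
    Assumed implementation, not the paper's."""
    n = len(weights)
    u = rng.random() / n  # single uniform offset shared by all strata
    indices, cumulative, j = [], weights[0], 0
    for i in range(n):
        target = u + i / n
        while cumulative < target and j < n - 1:
            j += 1
            cumulative += weights[j]
        indices.append(j)
    return indices

def smc_step(particles, log_weights, rng):
    """One reweight-and-resample step over a particle population:
    normalize the (log-domain) weights, then draw ancestors."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]  # stable exponentiation
    total = sum(w)
    w = [x / total for x in w]
    idx = systematic_resample(w, rng)
    return [particles[i] for i in idx]
```

With uniform log-weights the step returns the population unchanged; when one particle dominates, all ancestors collapse onto it, which is the behavior SMC-based samplers rely on.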
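The dataset-splits protocol (1024 random images drawn from each dataset's validation set, shared across all evaluated methods) can be sketched as follows; the function name and the fixed seed are assumptions for reproducibility, not details given in the paper.

```python
import random

def sample_eval_subset(validation_files, n=1024, seed=0):
    """Draw a fixed random evaluation subset from a validation set,
    mirroring the described protocol. Seed and name are assumptions."""
    if len(validation_files) < n:
        raise ValueError("validation set smaller than requested subset")
    rng = random.Random(seed)
    # Sort so the subset order is deterministic across runs.
    return sorted(rng.sample(list(validation_files), n))
```

Reusing the same seeded subset for every method keeps the comparison paired, which matters when metrics are averaged over only ~1k images.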
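The reported hyperparameter search (η ∈ {0.05, 0.5, 1.0}, plus a grid over κ², s, and ρ for LD-SMC) amounts to an exhaustive grid search. A minimal sketch follows; the grid values come from the paper, while the scoring function and the minimize-the-score selection rule are placeholders (assumptions).

```python
import itertools

# Grid values as reported in the paper's search; key names are ASCII
# stand-ins for the Greek symbols (eta, kappa^2, rho).
GRID = {
    "eta": [0.05, 0.5, 1.0],
    "kappa2": [0.5, 1.5, 2.5],
    "s": [0, 100, 200, 333],
    "rho": [0.5, 0.75],
}

def grid_configs(grid):
    """Enumerate every hyperparameter combination in the grid."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def best_config(score_fn, grid=GRID):
    """Select the configuration minimizing score_fn (e.g. a validation
    metric such as LPIPS); the metric choice is an assumption here."""
    return min(grid_configs(grid), key=score_fn)
```

The full grid above has 3 x 3 x 4 x 2 = 72 combinations per task, which is why such searches are typically run on a small held-out subset rather than the full evaluation set.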