Inverse Flow and Consistency Models

Authors: Yuchen Zhang, Jian Zhou

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrate the effectiveness of IF on synthetic and real datasets, outperforming prior approaches while enabling noise distributions that previous methods cannot support. Finally, we showcase applications of our techniques to fluorescence microscopy and single-cell genomics data, highlighting IF's utility in scientific problems.
Researcher Affiliation Academia Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, USA. Correspondence to: Jian Zhou <EMAIL>.
Pseudocode Yes Algorithm 1 IFM Training
1: Input: dataset D, initial model parameters θ, and learning rate η
2: repeat
3:   Sample x1 ~ D and t ~ U[0, 1]
4:   x0 ← stopgrad(ODE_{vθ}(x1, 1 → 0))
5:   Sample xt ~ p(xt | x0)
6:   L(θ) ← ‖vθ,t(xt) − ut(xt | x0)‖²
7:   θ ← θ − η ∇θ L(θ)
8: until convergence
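The training loop above can be sketched in plain NumPy. This is a schematic toy, not the authors' implementation: the scalar velocity field v_theta(x, t) = θ·x, the Euler ODE solver, the linear conditional path for p(xt | x0), and the target velocity ut = x1 − x0 are all illustrative assumptions standing in for the paper's neural network and noise model.

```python
import numpy as np

rng = np.random.default_rng(0)

def v_theta(theta, x, t):
    # Toy linear velocity field v_theta(x, t) = theta * x (assumption; the
    # paper uses a neural network). t is unused in this toy field.
    return theta * x

def ode_solve(theta, x1, n_steps=20):
    # Euler integration of dx/dt = v_theta(x, t) backward from t=1 to t=0.
    # Stop-gradient (Algorithm 1, line 4): theta is treated as a constant here.
    x, dt = x1, 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - i * dt
        x = x - dt * v_theta(theta, x, t)
    return x

theta, eta = 0.5, 1e-2          # initial parameter and learning rate
data = rng.normal(2.0, 0.1, size=1000)  # toy 1-D dataset D

for step in range(500):
    x1 = rng.choice(data)                 # line 3: sample x1 ~ D, t ~ U[0, 1]
    t = rng.uniform()
    x0 = ode_solve(theta, x1)             # line 4: x0 = stopgrad(ODE(x1))
    # line 5: assumed Gaussian-perturbed linear path standing in for p(xt | x0)
    xt = (1 - t) * x0 + t * x1 + 0.01 * rng.normal()
    ut = x1 - x0                          # toy conditional target velocity
    # line 6-7: L(theta) = (v_theta(xt, t) - ut)^2; analytic gradient in theta
    grad = 2.0 * (v_theta(theta, xt, t) - ut) * xt
    theta -= eta * grad                   # gradient step
```

The stop-gradient in `ode_solve` mirrors the algorithm: the backward ODE defines the regression target x0, while only the flow-matching loss on (xt, ut) is differentiated.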
Open Source Code Yes Code available at https://github.com/jzhoulab/InverseFlow
Open Datasets Yes We evaluated the proposed method on images in the benchmark datasets BSDS500 (Arbeláez et al., 2011), Kodak, and Set12 (Zhang et al., 2017). ... The Fluorescence Microscopy Denoising (FMD) dataset published by Zhang et al. (2019) was downloaded from https://github.com/yinhaoz/denoising-fluorescence. ... The adult mouse brain dataset published by Zeisel et al. (2018) was downloaded from https://www.ncbi.nlm.nih.gov/sra/SRP135960. The dentate gyrus neurogenesis dataset published by Hochgerner et al. (2018a) was downloaded from https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE104323
Dataset Splits Yes All models were trained using the BSDS500 training set and evaluated on the BSDS500 test set, Kodak, and Set12.
Hardware Specification Yes All experiments were conducted on a server with 36 cores, 400 GB memory, and NVIDIA Tesla V100 GPUs.
Software Dependencies Yes All models were implemented with PyTorch 2.1 (Paszke et al., 2019) and trained with the AdamW (Loshchilov & Hutter, 2019) optimizer.
Experiment Setup Yes To train IFM or ICM, we first consider a discretized time sequence ϵ = t1 < t2 < ... < tN = 1, where ϵ is a small positive value close to 0. We follow Karras et al. (2022) to determine the time sequence with the formula t_i = (ϵ^(1/ρ) + (i−1)/(N−1) · (T^(1/ρ) − ϵ^(1/ρ)))^ρ, where ρ = 7, T = 1, and N = 11. ... For ICM, the loss is weighted by λ(i) = 1/(t_{i+1} − t_i), in the same way as Song & Dhariwal (2023). ...
Table 2. Model architectures and hyperparameters
dataset: Navier-Stokes | architecture: MLP | channels: [256, 256, 256, 256] | embed_dim: 256 | embed_scale: 1.0 | epochs: 2000 | lr: 5×10⁻⁴ | lr schedule: None
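The Karras-style time discretization and the ICM loss weighting above can be computed directly. A minimal sketch, assuming ϵ = 10⁻³ (the report only says ϵ is "a small positive value close to 0", so this value is an assumption):

```python
import numpy as np

def karras_time_steps(N=11, eps=1e-3, T=1.0, rho=7.0):
    # t_i = (eps^(1/rho) + (i-1)/(N-1) * (T^(1/rho) - eps^(1/rho)))^rho,
    # for i = 1..N, giving t_1 = eps and t_N = T (Karras et al., 2022).
    i = np.arange(1, N + 1)
    return (eps ** (1 / rho)
            + (i - 1) / (N - 1) * (T ** (1 / rho) - eps ** (1 / rho))) ** rho

t = karras_time_steps()
# ICM loss weight per interval, as in Song & Dhariwal (2023):
# lam[i] = 1 / (t_{i+1} - t_i), which up-weights the fine steps near t = eps.
lam = 1.0 / np.diff(t)
```

With ρ = 7 the schedule clusters points near ϵ, so the reciprocal-gap weighting strongly emphasizes the small-t intervals.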