Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

One Diffusion Step to Real-World Super-Resolution via Flow Trajectory Distillation

Authors: Jianze Li, Jiezhang Cao, Yong Guo, Wenbo Li, Yulun Zhang

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Comprehensive experiments demonstrate that our method outperforms existing one-step diffusion-based Real-ISR methods. The code and model will be released at https://github.com/JianzeLi-114/FluxSR. Cited section headings: 5. Experiments; 5.1. Experimental Settings (Training Datasets); 5.2. Comparison with State-of-the-Art Methods; 5.3. Ablation Study.
Researcher Affiliation Collaboration Jianze Li *1 (Shanghai Jiao Tong University), Jiezhang Cao *2 (Harvard University), Yong Guo 3 (Huawei Consumer Business Group), Wenbo Li 4 (Huawei Noah's Ark Lab). Correspondence to: Yulun Zhang <EMAIL>.
Pseudocode Yes Algorithm 1: FluxSR Training Procedure
Open Source Code No The code and model will be released at https://github.com/JianzeLi-114/FluxSR.
Open Datasets Yes We evaluate our model on the synthetic dataset DIV2K-val (Agustsson & Timofte, 2017) and two real datasets: RealSR (Cai et al., 2019) and RealSet65 (Yue et al., 2024).
Dataset Splits Yes Test Datasets. We evaluate our model on the synthetic dataset DIV2K-val (Agustsson & Timofte, 2017) and two real datasets: RealSR (Cai et al., 2019) and RealSet65 (Yue et al., 2024). From DIV2K-val, we use the Real-ESRGAN degradation pipeline to generate corresponding LR images. On these datasets, we evaluate using full-size images to assess the model's performance in real-world scenarios.
Hardware Specification No The paper mentions hardware in the context of limitations for other methods, not for its own experimental setup: "For example, we find that even a server with 8 A800-80GB GPUs cannot satisfy the memory requirement of this distillation if we directly apply the popular one-step distillation method OSEDiff (Wu et al., 2024a) on top of Flux.1-dev (Labs, 2023)." No specific hardware details are provided for the authors' experiments.
Software Dependencies No The paper does not provide specific version numbers for any software libraries, programming languages, or tools used in the experiments.
Experiment Setup Yes In this section, we use RealSR as the test dataset. The training iterations are set to 30k. Other settings remain consistent with those mentioned in Sec. 5.1.
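For context on the Dataset Splits evidence above: a "Real-ESRGAN degradation pipeline" synthesizes LR test images from HR ones by chaining degradations (blur, resizing, noise, JPEG compression). The sketch below is an illustration only, not the authors' code or the full Real-ESRGAN pipeline: a minimal first-order degradation in NumPy (Gaussian blur, area downsampling, additive Gaussian noise; the JPEG step is omitted). The function names `degrade`, `blur`, and `gaussian_kernel` and all parameter defaults are hypothetical.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    # Normalized 2-D Gaussian blur kernel.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, kernel):
    # Naive "same"-size convolution per channel with edge padding.
    pad = kernel.shape[0] // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    kh, kw = kernel.shape
    h, w = img.shape[:2]
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + h, j:j + w, :]
    return out

def degrade(hr, scale=4, sigma=1.5, noise_std=0.02, rng=None):
    """Illustrative first-order degradation: blur -> area downsample -> noise.

    `hr` is an HxWx3 float image in [0, 1]; JPEG compression is omitted here.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    img = blur(hr.astype(np.float64), gaussian_kernel(7, sigma))
    h, w, c = img.shape
    img = img[: h - h % scale, : w - w % scale, :]       # crop to a multiple of scale
    lr = img.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))
    lr = lr + rng.normal(0.0, noise_std, lr.shape)        # sensor-like noise
    return np.clip(lr, 0.0, 1.0)

# Example: a 64x64 HR patch yields a 16x16 LR patch at scale 4.
hr = np.random.default_rng(1).random((64, 64, 3))
lr = degrade(hr, scale=4)
```

The actual Real-ESRGAN pipeline applies two such degradation rounds with randomized blur types, resize modes, noise models, and JPEG quality; this sketch only conveys the overall structure.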