IRBridge: Solving Image Restoration Bridge with Pre-trained Generative Diffusion Models
Authors: Hanting Wang, Tao Jin, Wang Lin, Shulei Wang, Hai Huang, Shengpeng Ji, Zhou Zhao
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on six image restoration tasks demonstrate that IRBridge efficiently integrates generative priors, resulting in improved robustness and generalization performance. ... 4. Experiments ... Table 1 presents the quantitative comparison results of our IRBridge with other methods on the aforementioned tasks. We employ PSNR, SSIM, LPIPS, and FID as evaluation metrics. |
| Researcher Affiliation | Academia | 1Zhejiang University. Correspondence to: Zhou Zhao <EMAIL>. |
| Pseudocode | No | The paper describes methods and equations (e.g., Proposition 3.1) but does not include any clearly labeled pseudocode or algorithm blocks. The description of the framework is narrative. |
| Open Source Code | No | Code will be available at GitHub. |
| Open Datasets | Yes | Image Deraining. ... we used the Rain100H dataset (Yang et al., 2019)... Image Dehazing. ... we used the RESIDE dataset (Li et al., 2019)... Image Desnowing. ... we used the Snow100K dataset (Liu et al., 2018)... Image Raindrop Removal. ... using the Raindrop dataset (Qian et al., 2018)... Low-light enhancement. ... using the LOL dataset (Wei et al., 2018)... Image Inpainting. ... on the CelebA-HQ (Liu et al., 2015) 256×256 dataset. |
| Dataset Splits | Yes | Rain100H dataset ... 1,800 paired images for training and 100 for testing. ... RESIDE dataset ... trained on the Outdoor Training Set (OTS) subset, containing 72,135 images, and tested on the Synthetic Objective Testing Set (SOTS) subset, which includes 500 images. ... Snow100K dataset ... 50K images are used for training and 50K for testing. ... Raindrop dataset ... 861 training images and 58 testing samples. ... LOL dataset ... 485 paired images for training and 15 for testing. ... CelebA-HQ ... trained the model using 20,000 images with randomly generated brush masks. |
| Hardware Specification | Yes | The model is trained on an Nvidia RTX 3090 GPU with a batch size of 12. |
| Software Dependencies | No | The paper mentions software components like "AdamW optimizer", "mixed-precision training", and "ControlNet", but it does not specify version numbers for any libraries, frameworks, or programming languages (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | We use the AdamW optimizer with a learning rate of 5.0 × 10⁻⁵ and train for a total of 10k steps. The AdamW optimizer is used with β1 = 0.9 and β2 = 0.999 to maintain a balance between the momentum term and the variance estimate. A weight decay of 1.0 × 10⁻² is applied for regularization, while a small epsilon value of 1.0 × 10⁻⁸ ensures numerical stability. ... A constant schedule is employed with 500 warmup steps to gradually ramp up the learning rate at the start of training. |
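The reported experiment setup can be summarized as a small configuration sketch. This is not the authors' code (none is published yet); the hyperparameter values are quoted from the paper, while the variable names and the linear shape of the warmup ramp are assumptions for illustration.

```python
# Hyperparameters quoted from the paper's experiment setup (IRBridge, ICML 2025).
ADAMW_CONFIG = {
    "lr": 5.0e-5,           # learning rate
    "betas": (0.9, 0.999),  # momentum term / variance-estimate balance
    "weight_decay": 1.0e-2, # regularization
    "eps": 1.0e-8,          # numerical stability
}
TOTAL_STEPS = 10_000        # "train for a total of 10k steps"
WARMUP_STEPS = 500          # constant schedule with 500 warmup steps

def lr_at(step: int, base_lr: float = ADAMW_CONFIG["lr"]) -> float:
    """Constant schedule with a warmup ramp over the first 500 steps.

    The paper only says the learning rate is "gradually ramped up";
    a linear ramp is assumed here.
    """
    if step < WARMUP_STEPS:
        return base_lr * (step + 1) / WARMUP_STEPS
    return base_lr
```

After warmup, `lr_at` returns the constant base rate of 5.0 × 10⁻⁵ for the remainder of the 10k training steps.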