RefDeblur: Blind Motion Deblurring with Self-Generated Reference Image
Authors: Insoo Kim, Geonseok Seo, Hyong-Euk Lee, Jinwoo Shin
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have been conducted to demonstrate the superiority of the proposed method on various datasets, such as the RealBlur (Rim et al., 2020), GoPro (Nah et al., 2017), and RSBlur (Rim et al., 2022) datasets. |
| Researcher Affiliation | Collaboration | Insoo Kim (AI Center, Samsung Electronics; Korea Advanced Institute of Science & Technology (KAIST)); Geonseok Seo (AI Center, Samsung Electronics; Seoul National University (SNU)); Hyong-Euk Lee (AI Center, Samsung Electronics); Jinwoo Shin (Korea Advanced Institute of Science & Technology (KAIST)) |
| Pseudocode | No | The paper describes the method using mathematical equations and network architecture diagrams (Figure 3), but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We use GoPro (Nah et al., 2017), RealBlur (Rim et al., 2020), and RSBlur (Rim et al., 2022) for training and test sets. GoPro has been widely used for motion deblurring tasks, but it is a synthetic dataset whose blurred images are generated by averaging sharp video frames captured by a high-speed camera. In contrast, RealBlur and RSBlur are realistic motion blur datasets that capture blurred and sharp images of the same scene using a beam splitter. |
| Dataset Splits | Yes | GoPro contains 2,103 and 1,111 blur-sharp image pairs for training and test sets, respectively. ... Each RealBlur type comprises 3,758 and 980 image pairs for training and test sets, respectively. The RSBlur dataset contains 8,878 and 3,360 blur-sharp image pairs for training and test sets, respectively. |
| Hardware Specification | No | The paper mentions using a 'Samsung Galaxy Note 20 Ultra' for capturing real-world images, but it does not specify any hardware details (e.g., GPU models, CPU types) used for running the experiments or training the models. |
| Software Dependencies | No | The paper mentions optimizing models using the AdamW and Adam algorithms, but it does not specify any software libraries or frameworks (e.g., PyTorch, TensorFlow, CUDA) with their version numbers that were used for implementation. |
| Experiment Setup | Yes | We train RefDeblur-16 up to 1,000 epochs (batch size 16) for RealBlur, our RefDeblur variants (T / S / B) up to 2,000 epochs (batch size 32) for RealBlur, our RefDeblur variants (S / B) up to 1,000 epochs (batch size 32) for RSBlur, and our RefDeblur-B up to 12,000 epochs (batch size 64) for GoPro. Our kernel-free model is optimized with the AdamW (Loshchilov & Hutter, 2019) algorithm (β1 = 0.9, β2 = 0.9, and weight decay 1e-3) using a cosine annealing schedule (1e-3 to 1e-7), gradually reduced over the total iterations for each dataset. We optimize our kernel-based model using the Adam algorithm (β1 = 0.9, β2 = 0.9, and weight decay 0) with a step schedule (initial learning rate 1e-4), decaying by a factor of 0.5 every 200 epochs for the RealBlur and RSBlur datasets (up to 1,000 epochs), and every 600 epochs for the GoPro dataset (up to 3,000 epochs). We set our hyperparameter λ = 0.01 and use RealBlur-J for all ablation studies unless otherwise specified. |
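The two learning-rate schedules quoted above (cosine annealing from 1e-3 to 1e-7 for the kernel-free model; step decay by 0.5 every 200 epochs from 1e-4 for the kernel-based model) can be sketched framework-agnostically. Since the paper does not name its framework or release code, the function names below are hypothetical illustrations of the stated hyperparameters, not the authors' implementation:

```python
import math

def cosine_annealing_lr(step, total_steps, lr_max=1e-3, lr_min=1e-7):
    """Cosine annealing from lr_max down to lr_min over total_steps,
    matching the schedule stated for the kernel-free model."""
    progress = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

def step_decay_lr(epoch, initial_lr=1e-4, factor=0.5, step_size=200):
    """Step schedule: multiply the learning rate by `factor` every
    `step_size` epochs, matching the kernel-based model's settings
    for RealBlur/RSBlur (use step_size=600 for GoPro)."""
    return initial_lr * factor ** (epoch // step_size)

# Example: learning rate at the start, midpoint, and end of training.
print(cosine_annealing_lr(0, 1000))     # starts at 1e-3
print(cosine_annealing_lr(1000, 1000))  # ends at 1e-7
print(step_decay_lr(400))               # two halvings: 2.5e-5
```

In a framework such as PyTorch, the same behavior would typically come from `CosineAnnealingLR` and `StepLR` scheduler objects; the closed-form versions above are just a compact way to verify the quoted numbers.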