Debiased All-in-one Image Restoration with Task Uncertainty Regularization

Authors: Gang Wu, Junjun Jiang, Yijun Wang, Kui Jiang, Xianming Liu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments in diverse all-in-one restoration settings demonstrate the superiority and generalization of our approach. For example, AirNet retrained with TUR achieves average improvements of 1.16 dB on three distinct tasks and 1.81 dB on five distinct all-in-one tasks. We provide detailed evaluation results across diverse all-in-one image restoration settings, demonstrating the superior performance obtained by our approach.
Researcher Affiliation | Academia | Faculty of Computing, Harbin Institute of Technology, Harbin 150001, China. EMAIL, EMAIL
Pseudocode | No | The paper describes the proposed method using mathematical formulations and descriptive text, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | For more recent achievements in the field of all-in-one image restoration, please refer to our survey paper (Jiang et al. 2024a) and continuously updated project (https://github.com/Harbinzzy/All-in-One-Image-Restoration-Survey). The paper does not explicitly state that the code for the proposed Task Uncertainty Regularization (TUR) methodology is open-source or provide a direct link to its implementation.
Open Datasets | Yes | Setting 1: Seven Degradation Tasks: Following the setup proposed in (Kong, Dong, and Zhang 2024), we take the DIV2K and Flickr2K as the training dataset... Setting 2: Rain-Haze-Noise: ...we combine the BSD400 (Arbelaez et al. 2011) and WED (Ma et al. 2017) datasets for training. ...The Rain100L (Yang et al. 2017) dataset is used for the deraining task, and the SOTS (Li et al. 2019) dataset is employed for dehazing. Setting 3: Rain-Haze-Noise-Blur-Dark: ...which introduces GoPro (Nah, Kim, and Lee 2017) for deblurring, and LOL (Wei et al. 2018) for low-light enhancement... Setting 4: Synthetic and Real-World Deweathering: ...training data, termed All-Weather, includes images from Snow100K (Liu et al. 2018), Raindrop (Qian et al. 2018), and Outdoor-Rain (Li, Cheong, and Tan 2019) datasets.
Dataset Splits | Yes | Setting 1: Seven Degradation Tasks: Following the setup proposed in (Kong, Dong, and Zhang 2024), we take the DIV2K and Flickr2K as the training dataset, and evaluate our method on seven distinct degradation tasks... Setting 2: Rain-Haze-Noise: ...we combine the BSD400 (Arbelaez et al. 2011) and WED (Ma et al. 2017) datasets for training. Noisy images are generated with Gaussian noise at three levels: σ = 15, 25, 50, and testing is performed on the BSD68 dataset. The Rain100L (Yang et al. 2017) dataset is used for the deraining task, and the SOTS (Li et al. 2019) dataset is employed for dehazing.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper does not provide specific details about ancillary software dependencies, such as programming languages, libraries, or frameworks with their version numbers, that would be needed to replicate the experiments.
Experiment Setup | No | The paper describes the datasets, tasks, and baseline models used for experiments, and outlines the integration of the Uncertainty Estimation Module. However, it does not provide specific hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings, which are crucial for replicating the experimental setup.
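Since the paper provides no pseudocode and its exact TUR formulation is not reproduced in this report, the following is only a minimal sketch of a generic homoscedastic-uncertainty-weighted multi-task loss (in the style of Kendall et al.), the family of techniques the name "task uncertainty regularization" suggests. The function name and parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Generic uncertainty-weighted multi-task loss (assumed form,
    NOT the paper's exact TUR): each per-task loss L_i is scaled by
    exp(-s_i), where s_i = log(sigma_i^2) is a learnable per-task
    log-variance, and an additive s_i term penalizes inflating the
    uncertainty to escape a hard task.

        L_total = sum_i( exp(-s_i) * L_i + s_i )
    """
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# With all log-variances at zero, the combined loss reduces to the
# plain sum of the per-task losses.
total = uncertainty_weighted_loss([0.5, 1.0, 0.25], [0.0, 0.0, 0.0])
```

In practice the `log_vars` would be trainable parameters optimized jointly with the restoration network, so that harder (higher-uncertainty) tasks are automatically down-weighted instead of dominating the shared backbone.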
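The Setting 2 protocol above synthesizes noisy inputs by adding Gaussian noise at σ = 15, 25, 50 on the 8-bit intensity scale. A minimal sketch of that standard corruption step (the helper name and seeding are illustrative assumptions, not from the paper):

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=None):
    """Add zero-mean Gaussian noise with standard deviation `sigma`
    (on the 0-255 scale) to an 8-bit image, clipping back to the
    valid range -- the usual way denoising training pairs are made."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# One noisy copy per noise level used in the Rain-Haze-Noise setting.
clean = np.full((8, 8), 128, dtype=np.uint8)
noisy_by_sigma = {s: add_gaussian_noise(clean, s, seed=0) for s in (15, 25, 50)}
```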