Dehaze-RetinexGAN: Real-World Image Dehazing via Retinex-based Generative Adversarial Network

Authors: Xinran Wang, Guang Yang, Tian Ye, Yun Liu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Extensive experiments on several real-world datasets demonstrate that our proposed framework performs favorably over state-of-the-art dehazing methods in visual quality and quantitative evaluation."
Researcher Affiliation: Academia. (1) College of Artificial Intelligence, Southwest University, Chongqing, China; (2) The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; (3) College of Computing and Data Science, Nanyang Technological University, Singapore.
Pseudocode: No. The paper describes its methods and architectures in text and figures but does not include a distinct pseudocode or algorithm block.
Open Source Code: No. The paper contains no explicit statement or link indicating the release of source code for the described methodology.
Open Datasets: Yes. "Datasets. In the first stage of our Dehaze-Retinex GAN, we randomly select 4000 images from the URHI (Unannotated Real-world Hazy Images) dataset (Li et al. 2019) for training. For the fine-tuning stage, we use the above 4000 images and the overall SOTS (Synthetic Objective Testing Set) dataset (Li et al. 2019) for unpaired training. For testing, we employ four real-world datasets, namely the RTTS (Real-world Task-driven Testing Set) dataset (4322 real-world hazy images) (Li et al. 2019), the Fattal dataset (31 classic real-world hazy images) (Fattal 2014), HSTS (Hybrid Subjective Testing Set) (10 real-world hazy images) (Li et al. 2019), and the URHI test dataset (the remaining 809 real-world hazy images), for quantitative and qualitative comparisons."
Dataset Splits: Yes. The evidence repeats the dataset description quoted above and adds: "To maintain the performance of our Dehaze-Retinex GAN, the training for the two stages is conducted separately."
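The reported dataset usage can be summarized in a small configuration sketch. The image counts come directly from the quoted text; the dictionary keys and stage names are illustrative, not from the paper (the SOTS set is used in full, with no count given in the excerpt).

```python
# Dataset usage as reported in the paper's evidence text.
# Stage names and the "all" placeholder for SOTS are assumptions.
SPLITS = {
    "stage1_train": {"URHI": 4000},
    "finetune_train": {"URHI": 4000, "SOTS": "all"},  # unpaired training
    "test": {"RTTS": 4322, "Fattal": 31, "HSTS": 10, "URHI_test": 809},
}

# Total number of real-world test images across the four test sets.
total_test_images = sum(SPLITS["test"].values())
```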
Hardware Specification: Yes. "Our Dehaze-Retinex GAN is implemented using PyTorch (Paszke et al. 2019) and trained on a single NVIDIA RTX 3090 GPU (24GB) with a batch size of 36."
Software Dependencies: No. The paper mentions PyTorch but does not specify a version number. Other mentioned tools, such as the Adam optimizer and the cosine annealing algorithm, are techniques rather than software dependencies with versions.
Experiment Setup: Yes. "Our Dehaze-Retinex GAN is implemented using PyTorch (Paszke et al. 2019) and trained on a single NVIDIA RTX 3090 GPU (24GB) with a batch size of 36. All the input images are resized to 256×256. We utilize the Adam optimizer (Kingma and Ba 2014) with initial momentum β1 = 0.9 and β2 = 0.999. The initial learning rate is 1×10⁻⁴. The cosine annealing algorithm is employed to progressively reduce the learning rate. The hyperparameters λSSIM, λcon, λexp, λGAN, λL and λR are set to 4.0, 2.0, 0.003, 0.5, 0.2 and 0.2, respectively."
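The learning-rate schedule and loss weights in the setup above can be sketched in a few lines of pure Python. This is a minimal illustration of standard cosine annealing applied to the paper's initial rate of 1e-4; the total number of schedule steps and the minimum rate are assumptions (the excerpt does not state them), and the function name is hypothetical.

```python
import math

def cosine_annealing_lr(step, total_steps, lr_init=1e-4, lr_min=0.0):
    """Cosine-annealed learning rate (standard SGDR-style schedule).

    lr_init matches the paper's initial learning rate of 1e-4;
    total_steps and lr_min are assumed, as the excerpt gives neither.
    """
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + math.cos(math.pi * step / total_steps))

# Loss weights reported in the paper's experiment setup.
LOSS_WEIGHTS = {
    "ssim": 4.0, "con": 2.0, "exp": 0.003,
    "gan": 0.5, "L": 0.2, "R": 0.2,
}
```

The schedule starts at 1e-4, reaches half that value at the schedule midpoint, and decays to lr_min at the final step; in PyTorch the equivalent would be `torch.optim.lr_scheduler.CosineAnnealingLR` attached to an Adam optimizer with `betas=(0.9, 0.999)`.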