Prior-guided Hierarchical Harmonization Network for Efficient Image Dehazing

Authors: Xiongfei Su, Siyuan Li, Yuning Cui, Miao Cao, Yulun Zhang, Zheng Chen, Zongliang Wu, Zedong Wang, Yuanlong Zhang, Xin Yuan

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Comprehensive experiments demonstrate that our model efficiently attains the highest level of performance among existing methods across four different datasets for image dehazing. ... In this section, we evaluate our proposed PGH2Net in four data sets for image dehazing tasks, including indoor synthetic data, outdoor synthetic data, and two real data. ... Ablation Studies"
Researcher Affiliation | Academia | ¹Zhejiang University, Hangzhou, China (e-mail: EMAIL); ²Westlake University, Hangzhou, China; ³Technical University of Munich, Munich, Germany; ⁴Shanghai Jiao Tong University, Shanghai, China; ⁵Tsinghua University, Beijing, China
Pseudocode | No | The paper describes the model architecture and its components (Spatial Aggregation Block, Prior Aggregation, Channel Harmonization Block, Sandwich Module, Histogram Equation Guided Module) using textual descriptions, mathematical equations, and diagrams. However, it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available. The sentence "Our future work will explore the framework in other image tasks." does not imply code release for the current work.
Open Datasets | Yes | "We train and evaluate our models on synthetic and real-world datasets for image dehazing. Following (Wang et al. 2022b, 2024), we train separate models on the RESIDE-Indoor and RESIDE-Outdoor datasets (Li et al. 2018), and evaluate the resulting models on the corresponding test sets of RESIDE, i.e., SOTS-Indoor and SOTS-Outdoor, respectively. In addition, we adopt two real-world datasets, i.e., Dense-Haze (Ancuti et al. 2019) and O-HAZE (Ancuti et al. 2018), to verify the robustness of our model in more challenging real-world scenarios."
Dataset Splits | Yes | "Following (Wang et al. 2022b, 2024), we train separate models on the RESIDE-Indoor and RESIDE-Outdoor datasets (Li et al. 2018), and evaluate the resulting models on the corresponding test sets of RESIDE, i.e., SOTS-Indoor and SOTS-Outdoor, respectively."
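The train/evaluate pairing quoted above can be summarized as a small configuration sketch. The dictionary below is purely illustrative (the authors released no config file); it only restates the split protocol the paper describes.

```python
# Illustrative sketch of the paper's reported split protocol; these names
# follow the datasets cited in the text, not any released configuration.
TRAIN_TO_TEST = {
    "RESIDE-Indoor": "SOTS-Indoor",    # indoor synthetic data
    "RESIDE-Outdoor": "SOTS-Outdoor",  # outdoor synthetic data
}
# Real-world evaluation sets used to test robustness.
REAL_WORLD_EVAL = ["Dense-Haze", "O-HAZE"]

for train_set, test_set in TRAIN_TO_TEST.items():
    print(f"train on {train_set} -> evaluate on {test_set}")
```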
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using Adam (Kingma and Ba 2014) as the optimizer and cosine annealing (Loshchilov and Hutter 2016) for the learning rate schedule. However, it does not specify versions for any programming languages, libraries (e.g., PyTorch, TensorFlow), or other software components.
Experiment Setup | Yes | "The models are trained using Adam (Kingma and Ba 2014) with initial learning rate as 8e-4, which is gradually reduced to 1e-6 with cosine annealing (Loshchilov and Hutter 2016). For data augmentation, we adopt random horizontal flips with a probability of 0.5. Models are trained on 32 samples of size 256×256 for each iteration. ... L_total = L_spatial + λ1·L_frequency + λ2·L_ssim, where loss weight λ requires fine-tuning in practice and we used λ1 = 0.5 and λ2 = 1 as the final setting. ... The model is trained with an initial learning rate of 8e-4 and a batch size of 32, ending in epoch 100."
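The quoted setup pins down the schedule and loss numerically, so both can be sketched in a few lines. This is a minimal sketch assuming the standard cosine-annealing formulation of Loshchilov & Hutter (2016) over the reported 100 epochs; the function and variable names are ours, not the authors', and the loss terms are placeholders for whatever spatial, frequency, and SSIM losses the paper computes.

```python
import math

# Reported hyperparameters: lr annealed from 8e-4 to 1e-6 over 100 epochs.
LR_INIT, LR_MIN, EPOCHS = 8e-4, 1e-6, 100

def cosine_lr(epoch: int) -> float:
    # Standard cosine annealing (assumption: per-epoch, single cycle):
    # lr(t) = lr_min + 0.5 * (lr_init - lr_min) * (1 + cos(pi * t / T))
    return LR_MIN + 0.5 * (LR_INIT - LR_MIN) * (1 + math.cos(math.pi * epoch / EPOCHS))

def total_loss(l_spatial: float, l_frequency: float, l_ssim: float,
               lam1: float = 0.5, lam2: float = 1.0) -> float:
    # L_total = L_spatial + λ1·L_frequency + λ2·L_ssim, with λ1=0.5, λ2=1.
    return l_spatial + lam1 * l_frequency + lam2 * l_ssim

print(cosine_lr(0))    # starts at 8e-4
print(cosine_lr(100))  # ends at 1e-6
print(total_loss(0.1, 0.04, 0.2))
```

The schedule hits the two reported endpoints exactly, which is a quick sanity check that the quoted 8e-4 and 1e-6 values are mutually consistent under a single-cycle cosine decay.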