NightHaze: Nighttime Image Dehazing via Self-Prior Learning
Authors: Beibei Lin, Yeying Jin, Yan Wending, Wei Ye, Yuan Yuan, Robby T. Tan
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our NightHaze achieves state-of-the-art performance, outperforming existing nighttime image dehazing methods by a substantial margin of 15.5% for MUSIQ and 23.5% for CLIP-IQA. Table 1: Quantitative comparison on the Real Night Haze dataset. |
| Researcher Affiliation | Collaboration | 1National University of Singapore, 2Huawei International Pte Ltd |
| Pseudocode | No | The paper describes the methodology in prose, including equations and detailed explanations of components like Light Map, Blending Weight Map, Noise, and the self-refinement process. However, it does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not explicitly provide a statement about releasing source code, nor does it include any links to a code repository in the main text or references. |
| Open Datasets | No | Our real-world nighttime haze dataset, Real Night Haze, is collected from the internet and from existing nighttime dehazing methods (Zhang et al. 2020; Jin et al. 2023), and consists of 440 real-world nighttime haze images. |
| Dataset Splits | No | The paper mentions a "Real Night Haze" dataset consisting of 440 images and refers to "validation sets" in a figure caption. However, it does not provide specific details on how this dataset is split into training, validation, or test sets (e.g., percentages, absolute counts, or predefined standard splits). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper mentions using "Adam as our optimizer" and adopting "the MAE network (He et al. 2022) as our backbone," but it does not specify any software libraries or frameworks with their version numbers (e.g., PyTorch 1.9, TensorFlow 2.x, Python 3.8). |
| Experiment Setup | Yes | In our self-prior learning, the size of input images is 224 × 224. During the training stage, we use Adam as our optimizer and set the initial learning rate to 1.5e-4. The total training steps and the training batch size are set to 20,000 and 128, respectively. T_low and T_high are set to 0.001 and 0.1, respectively. We set the number of the selected regions to 8. The size of each region is 64 × 64 and the range of the changed value is a uniform distribution over (0, 0.04). We set W_n to 0.1. For a haze image of size 256 × 256, we first sample overlapping regions within the input haze image. The image size of the overlapping regions is 224 × 224 and the stride of the overlap sampling is 4. We use Adam as our optimizer, with the initial learning rate set to 2e-5. We set the training step and the training batch size to 10,000 and 16, respectively. The EMA weight is set to 0.9999. The thresholds v1_thr and v2_thr are set to 0.005 and 0. |
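The overlap-sampling arithmetic in the quoted setup can be made concrete with a minimal sketch (not the authors' code; the function name and structure are ours, assuming only the reported 256 × 256 input, 224 × 224 crop size, and stride of 4):

```python
# Hedged illustration: enumerate the (top, left) offsets of the
# overlapping 224x224 crops sampled from a 256x256 haze image
# with a stride of 4, as reported in the paper's setup.

def crop_positions(image_size=256, crop_size=224, stride=4):
    """Return (top, left) offsets of every overlapping crop."""
    offsets = range(0, image_size - crop_size + 1, stride)
    return [(top, left) for top in offsets for left in offsets]

positions = crop_positions()
# 9 valid offsets per axis (0, 4, ..., 32) -> 81 overlapping regions
```

With these values each axis admits 9 offsets, so the self-refinement stage sees 81 overlapping regions per 256 × 256 image.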