Boosting Image De-Raining via Central-Surrounding Synergistic Convolution

Authors: Long Peng, Yang Wang, Xin Di, Peizhe Xia, Xueyang Fu, Yang Cao, Zheng-Jun Zha

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluations of twelve de-raining methods on nine public datasets demonstrate that our proposed SC can comprehensively improve the performance of twelve de-raining networks under various rainy conditions without changing the original network structure or introducing extra computational costs. To demonstrate the effectiveness of SC, we evaluate it on twelve de-raining methods in nine publicly available datasets. In Table 1, we report the PSNR/SSIM of ten de-raining baselines on six benchmarks.
Researcher Affiliation | Academia | University of Science and Technology of China, EMAIL, EMAIL
Pseudocode | No | The paper describes the operations of Central-Surrounding Difference Convolution (CSD) and Central-Surrounding Addition Convolution (CSA) using mathematical formulas (Eq. 2, 3, 4, 6, 7, 8, 9, 10) and prose, but does not present them in a structured pseudocode or algorithm block.
Open Source Code | No | The source codes will be publicly available.
Open Datasets | Yes | We evaluate the effectiveness of our proposed method on nine public single-image de-raining datasets, including both synthetic and real datasets: Rain12 (Li et al. 2016), Rain200H (Yang et al. 2017), Rain200L (Yang et al. 2017), Rain1200 (Zhang and Patel 2018), Rain12600 (Fu et al. 2017), Outdoor-Rain (Li, Cheong, and Tan 2019), JORDER-R (Yang et al. 2017), ID-CGAN-R (Zhang, Sindagi, and Patel 2019) and SIRR-R (Wei et al. 2019).
Dataset Splits | Yes | Following previous works (Li, Cheong, and Tan 2019; Yi et al. 2021; Chen et al. 2023b), we use reference metrics of PSNR and SSIM to evaluate the performance with ground truth. For real datasets without ground truth, we use non-reference metrics to evaluate. Referring to previous works (Yi et al. 2021; Chen et al. 2024), we use four kinds of non-reference metrics, including the NIQE, BRISQUE, PIQE, and PI.
Hardware Specification | Yes | In all experiments, we keep the training settings (e.g., model framework, loss function, and active function) the same as the original official public code, except that the VC is replaced by the SC on eight NVIDIA RTX3090 GPUs at Pytorch.
Software Dependencies | No | In all experiments, we keep the training settings (e.g., model framework, loss function, and active function) the same as the original official public code, except that the VC is replaced by the SC on eight NVIDIA RTX3090 GPUs at Pytorch.
Experiment Setup | No | In all experiments, we keep the training settings (e.g., model framework, loss function, and active function) the same as the original official public code, except that the VC is replaced by the SC on eight NVIDIA RTX3090 GPUs at Pytorch.
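The Pseudocode row notes that the paper defines CSD only through equations. As a rough illustration only (not the authors' implementation — the function name `csd_conv2d`, the 3x3 window, and the exact tap arrangement are assumptions), a central-surrounding difference convolution can be sketched as weighting the difference between each surrounding pixel and the central pixel of the window:

```python
import numpy as np

def csd_conv2d(x, w):
    """Hypothetical sketch of a central-surrounding difference convolution.

    For each 3x3 window, every tap is the difference between that pixel
    and the window's central pixel, weighted by the kernel:
        y(p) = sum_i w_i * (x(p_i) - x(p_center))
    No padding is applied, so the output shrinks by 2 in each dimension.
    """
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = x[i:i + 3, j:j + 3]
            center = patch[1, 1]
            out[i, j] = np.sum(w * (patch - center))
    return out
```

Since sum_i w_i (x_i - x_c) = conv(x, w) - x_c * sum_i w_i, such an operator can be folded into a vanilla convolution by replacing the central weight w_c with w_c - sum_i w_i, which would be consistent with the paper's claim of improving networks without extra computational cost.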
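PSNR, quoted in the Dataset Splits row as one of the paper's reference metrics, is a standard quantity; a minimal sketch for context (the `data_range` default assumes 8-bit images and is not taken from the paper):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio between a ground-truth and a restored image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher PSNR indicates a restored image closer to the ground truth, which is why it is only applicable to the synthetic datasets with paired clean images; the real datasets fall back to the non-reference metrics (NIQE, BRISQUE, PIQE, PI) listed above.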