Dynamic Contrastive Knowledge Distillation for Efficient Image Restoration

Authors: Yunshuai Zhou, Junbo Qiao, Jincheng Liao, Wei Li, Simiao Li, Jiao Xie, Yunhang Shen, Jie Hu, Shaohui Lin

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that DCKD significantly outperforms the state-of-the-art KD methods across various image restoration tasks and backbones."
Researcher Affiliation | Collaboration | (1) East China Normal University, Shanghai, China; (2) Huawei Noah's Ark Lab, China; (3) Xiamen University, China; (4) Key Laboratory of Advanced Theory and Application in Statistics and Data Science, MOE, China
Pseudocode | No | The paper describes the methodology in narrative text and mathematical formulations (e.g., equations 1-10) but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a link to an arXiv preprint (https://arxiv.org/abs/2412.08939) in the implementation details but does not contain an explicit statement about releasing source code or a direct link to a code repository for the described methodology.
Open Datasets | Yes | "For image super-resolution, DCKD is trained using 800 images from DIV2K (Timofte et al. 2017) and evaluated on four benchmark datasets. For image deblurring, the models are trained and tested both on the GoPro dataset (Nah, Hyun Kim, and Mu Lee 2017b). For image deraining, we train DCKD on 13,712 clean-rainy image pairs collected from multiple datasets (Fu et al. 2017; Yang et al. 2017; Zhang, Sindagi, and Patel 2019; Li et al. 2016) and evaluate it on Test100 (Zhang, Sindagi, and Patel 2019), Rain100H (Yang et al. 2017), Rain100L (Yang et al. 2017), Test2800 (Fu et al. 2017), and Test1200 (Zhang and Patel 2018)."
Dataset Splits | Yes | "For image super-resolution, DCKD is trained using 800 images from DIV2K (Timofte et al. 2017) and evaluated on four benchmark datasets. For image deblurring, the models are trained and tested both on the GoPro dataset (Nah, Hyun Kim, and Mu Lee 2017b). For image deraining, we train DCKD on 13,712 clean-rainy image pairs collected from multiple datasets (Fu et al. 2017; Yang et al. 2017; Zhang, Sindagi, and Patel 2019; Li et al. 2016) and evaluate it on Test100 (Zhang, Sindagi, and Patel 2019), Rain100H (Yang et al. 2017), Rain100L (Yang et al. 2017), Test2800 (Fu et al. 2017), and Test1200 (Zhang and Patel 2018)."
Hardware Specification | Yes | "DCKD is implemented by PyTorch using 4 NVIDIA V100 GPUs."
Software Dependencies | No | The paper mentions 'PyTorch' as the implementation framework and the 'ADAM optimizer (Kingma and Ba 2014)' but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | "All the models are trained using the ADAM optimizer (Kingma and Ba 2014) with β₁ = 0.9, β₂ = 0.99, and ε = 10⁻⁸. The training batch size is set to 16 with a total of 2.5 × 10⁵ iterations. The initial learning rate is set to 10⁻⁴ and is decayed by a factor of 10 at every 10⁵ updates."
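The quoted setup fully determines the learning-rate trajectory, so it can be checked without the paper's code. The sketch below is a minimal, torch-free reconstruction of that schedule (initial LR 10⁻⁴, divided by 10 every 10⁵ updates); the function name and config dictionary are our own illustrative choices, not identifiers from the paper.

```python
def learning_rate(iteration, base_lr=1e-4, decay_every=100_000, factor=0.1):
    """Step decay as described in the paper: LR is divided by 10
    after every 1e5 training updates."""
    return base_lr * (factor ** (iteration // decay_every))

# Adam hyperparameters and training budget quoted from the paper
# (in PyTorch these would map to torch.optim.Adam(betas=(0.9, 0.99), eps=1e-8)
# together with a StepLR(step_size=100_000, gamma=0.1) scheduler).
train_config = {
    "betas": (0.9, 0.99),
    "eps": 1e-8,
    "batch_size": 16,
    "total_iterations": 250_000,  # 2.5e5
}

for it in (0, 100_000, 200_000):
    print(it, learning_rate(it))
```

With a total budget of 2.5 × 10⁵ iterations, the schedule yields exactly two decay steps, ending near 10⁻⁶.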