Knowledge Distillation with Multi-granularity Mixture of Priors for Image Super-Resolution

Authors: Simiao Li, Yun Zhang, Wei Li, Hanting Chen, Wenjia Wang, Bingyi Jing, Shaohui Lin, Jie Hu

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments illustrate the significance of the proposed MiPKD technique." Supporting sections: 4 EXPERIMENTAL RESULTS, 4.2 RESULTS AND COMPARISON, 5 ABLATION STUDY. "The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are computed to evaluate the quality of the SR model's output."
Researcher Affiliation | Collaboration | Simiao Li (Huawei Noah's Ark Lab); Yun Zhang (The Hong Kong University of Science and Technology (Guangzhou)); Wei Li, Hanting Chen (Huawei Noah's Ark Lab); Wenjia Wang (The Hong Kong University of Science and Technology (Guangzhou)); Bingyi Jing (Southern University of Science and Technology); Shaohui Lin (East China Normal University); Jie Hu (Huawei Noah's Ark Lab). EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology using prose and mathematical equations. There are no explicit pseudocode or algorithm blocks presented in a structured format.
Open Source Code | No | "The proposed MiPKD is implemented by the BasicSR (Wang et al., 2022b) and PyTorch (Paszke et al., 2019) framework and train them using 4 NVIDIA V100 GPUs." This statement indicates the use of existing frameworks but does not explicitly state that the code for the MiPKD methodology itself is open-sourced or provided.
Open Datasets | Yes | "We utilize DIV2K (Timofte et al., 2017) dataset for training, and evaluate models on various standard test sets: Set14 (Zeyde et al., 2012), Set5 (Bevilacqua et al., 2012), BSD100 (Martin et al., 2001), and Urban100 (Huang et al., 2015)."
Dataset Splits | No | The paper mentions using DIV2K for training and evaluating on standard test sets (Set14, Set5, BSD100, Urban100). However, it does not provide specific details on how the DIV2K dataset was split for training, validation, or testing, nor does it specify exact percentages or sample counts for any splits. While standard test sets imply predefined splits, the training dataset's partitioning is not described.
Hardware Specification | Yes | "The proposed MiPKD is implemented by the BasicSR (Wang et al., 2022b) and PyTorch (Paszke et al., 2019) framework and train them using 4 NVIDIA V100 GPUs."
Software Dependencies | No | "The Adam optimizer (Kingma & Ba, 2014) is employed for training models... The proposed MiPKD is implemented by the BasicSR (Wang et al., 2022b) and PyTorch (Paszke et al., 2019) framework..." The paper mentions software tools such as the Adam optimizer, BasicSR, and PyTorch but does not specify their version numbers, which are required for reproducibility.
Experiment Setup | Yes | "The Adam optimizer (Kingma & Ba, 2014) is employed for training models, utilizing parameters β1 = 0.9, β2 = 0.99 and ε = 1e-8 with 2.5e5 iterations. The learning rate is initialized at 1e-4 and reduced by a factor of 10 at each 1e5 iterations. We set the loss weights λkd, λfeat and λblock to 1, 1 and 0.1, respectively."