DIIN: Diffusion Iterative Implicit Networks for Arbitrary-scale Super-resolution

Authors: Tao Dai, Song Wang, Hang Guo, Jianping Wang, Zexuan Zhu

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on public datasets demonstrate that our method achieves state-of-the-art or competitive performance, highlighting its effectiveness and efficiency for arbitrary-scale SR. Our code is available at https://github.com/Song-1205/DIIN.
Researcher Affiliation | Academia | ¹College of Computer Science and Software Engineering, Shenzhen University; ²Tsinghua Shenzhen International Graduate School, Tsinghua University; ³School of Artificial Intelligence, Shenzhen University; ⁴National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University
Pseudocode | No | The paper describes the method using mathematical equations (e.g., Equations 1, 2, 5, 8, 12, 13, 14, 15, 16, 17) and textual descriptions of network components and block architectures (e.g., Figure 2), but does not contain a structured pseudocode or algorithm block.
Open Source Code | Yes | Our code is available at https://github.com/Song-1205/DIIN.
Open Datasets | Yes | Datasets. Similar to [Chen et al., 2021; Wei and Zhang, 2023], we use 800 high-quality images with a 2K resolution from the DIV2K [Agustsson and Timofte, 2017] dataset as training data. During testing, the model is evaluated on the DIV2K validation set and several commonly used benchmark datasets, including Set5 [Bevilacqua et al., 2012], Set14 [Zeyde et al., 2010], B100 [Martin et al., 2001] and Urban100 [Huang et al., 2015].
Dataset Splits | Yes | Datasets. Similar to [Chen et al., 2021; Wei and Zhang, 2023], we use 800 high-quality images with a 2K resolution from the DIV2K [Agustsson and Timofte, 2017] dataset as training data. During testing, the model is evaluated on the DIV2K validation set and several commonly used benchmark datasets... Specifically, we crop 128s × 128s patches as ground truth (GT), where s is a scaling factor sampled from the uniform distribution U(1, 4).
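The scale-dependent cropping described above can be sketched as follows. This is a minimal illustration of drawing s from U(1, 4) and cropping a 128s × 128s ground-truth patch; the helper name, the NumPy (H, W, C) image layout, and the small-image guard are assumptions, not the authors' code.

```python
import random

import numpy as np


def sample_gt_patch(img, base=128, s_min=1.0, s_max=4.0, rng=None):
    """Crop a random (128s x 128s) ground-truth patch, with the scale
    factor s drawn from the uniform distribution U(1, 4).

    img is assumed to be an (H, W, C) array; returns (patch, s).
    """
    rng = rng or random.Random()
    s = rng.uniform(s_min, s_max)
    size = round(base * s)          # GT patch side length: 128s
    h, w = img.shape[:2]
    size = min(size, h, w)          # guard against images smaller than 128s
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return img[top:top + size, left:left + size], s
```

In an arbitrary-scale SR pipeline, the low-resolution input would then typically be obtained by downsampling this GT patch by the same factor s back to a fixed 128 × 128 size.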
Hardware Specification | Yes | All experiments are implemented in PyTorch [Paszke et al., 2019] and executed on four NVIDIA RTX 3090 GPUs.
Software Dependencies | No | The paper states, "All experiments are implemented in PyTorch [Paszke et al., 2019] and executed on four NVIDIA RTX 3090 GPUs." While PyTorch is mentioned, a specific version number for PyTorch itself, or any other software dependencies, is not provided.
Experiment Setup | Yes | We train all models with the Adam optimizer [Kingma and Ba, 2015], starting from an initial learning rate of 4 × 10⁻⁵ and minimizing the L1 loss for 1500 epochs using a batch size of 32. The learning rate is updated by a cosine-annealing schedule every 50 epochs.
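The reported configuration maps directly onto standard PyTorch components. Below is a minimal sketch, not the authors' training script: the stand-in model, the illustrative tensor sizes, and the choice of cosine-annealing warm restarts with a 50-epoch period (one plausible reading of "updated every 50 epochs") are all assumptions.

```python
import torch
from torch import nn, optim

# Stand-in for the DIIN network; the real model is defined in the paper's repo.
model = nn.Conv2d(3, 3, 3, padding=1)

optimizer = optim.Adam(model.parameters(), lr=4e-5)   # initial LR from the paper
criterion = nn.L1Loss()                               # L1 training loss
# Cosine annealing with a 50-epoch restart period (interpretation of the paper).
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=50)

for epoch in range(2):  # the paper trains for 1500 epochs
    lr_batch = torch.rand(32, 3, 48, 48)   # batch of 32; sizes are illustrative
    gt_batch = torch.rand(32, 3, 48, 48)
    optimizer.zero_grad()
    loss = criterion(model(lr_batch), gt_batch)
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the cosine schedule once per epoch
```

In practice the batches would come from the random-scale DIV2K cropping pipeline described under Dataset Splits rather than random tensors.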