Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Diffusion Prior Interpolation for Flexibility Real-World Face Super-Resolution
Authors: Jiarui Yang, Tao Dai, Yufei Zhu, Naiqi Li, Jinmin Li, Shu-Tao Xia
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In extensive experiments conducted on synthetic and real datasets, along with consistency validation in face recognition, DPI demonstrates superiority over SOTA FSR methods. [...] Extensive experiments on both synthetic and real-world datasets demonstrate that our method outperforms SOTA FSR methods. |
| Researcher Affiliation | Academia | 1College of Artificial Intelligence, Nankai University, Tianjin, China 2Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China 3College of Computer Science and Software Engineering, Shenzhen University, China |
| Pseudocode | Yes | Algorithm 1: Diffusion Prior Interpolation, given a diffusion model (μθ(·), Σθ(·)) and Corrector CRT(·). |
| Open Source Code | No | The paper does not provide an explicit statement about releasing their own code, nor a link to a code repository for the methodology described. |
| Open Datasets | Yes | For evaluation, we utilize synthetic datasets FFHQ1000 and CelebA1000 (Liu et al. 2015), along with real-world datasets LFW (Huang et al. 2008), WebPhoto (Wang et al. 2021), and WIDER (Yang et al. 2016), serving as our testsets. |
| Dataset Splits | No | The paper mentions using specific datasets as "testsets" (e.g., FFHQ1000, CelebA1000, LFW, WebPhoto, WIDER) and that a pre-trained model was trained on FFHQ 49k. However, it does not provide specific training/validation splits or percentages for the experiments conducted with their proposed DPI method. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using a pre-trained DDPM from DPS and the DeepFace framework but does not specify version numbers for general software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | For each of these three scales, the parameters are set to (100, 1.4, 500), (300, 1.2, 750), and (500, 1, 1000), respectively. For real-world datasets, we adhere to the experimental settings in CodeFormer (Zhou et al. 2022), with fixed parameters set to (500, 1, 1000). The sparsity parameter k for CMs is set to 2 for all experiments. |
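As the notice above states, each paper's score is derived from binary reproducibility variables like those in this table. The exact scoring methodology is described in [1]; as an illustration only, the sketch below aggregates "Yes"/"No" classifications into a simple fraction-of-Yes score. The variable names mirror this table, but the function and weighting are hypothetical, not the pipeline's actual formula.

```python
# Hypothetical sketch of turning binary reproducibility variables into a
# score. The real methodology (including any weighting) is defined in [1];
# this merely computes the fraction of variables classified "Yes".

variables = {
    "Pseudocode": "Yes",
    "Open Source Code": "No",
    "Open Datasets": "Yes",
    "Dataset Splits": "No",
    "Hardware Specification": "No",
    "Software Dependencies": "No",
    "Experiment Setup": "Yes",
}

def reproducibility_score(results: dict) -> float:
    """Fraction of binary variables classified as 'Yes'."""
    binary = [v for v in results.values() if v in ("Yes", "No")]
    return sum(v == "Yes" for v in binary) / len(binary)

print(round(reproducibility_score(variables), 2))  # 3 of 7 -> 0.43
```

Non-binary rows (e.g., Research Type, Researcher Affiliation) are excluded from the fraction, since they describe the paper rather than a yes/no reproducibility criterion.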