QuARF: Quality-Adaptive Receptive Fields for Degraded Image Perception
Authors: Fei Gao, Ying Zhou, Ziyun Li, Wenwang Han, Jiaqi Shi, Maoying Qiao, Jinlan Xu, Nannan Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Thorough experimental results show that QuARF significantly and robustly improves performance on degraded images and outperforms data augmentation in most cases. The paper includes a dedicated "4 Experiments" section with subsections "4.1 Image Translation", "4.2 Image Restoration", "4.3 Face Parsing", "4.4 Semantic Segmentation", and "4.5 Ablation Study", all featuring detailed performance tables and figures. |
| Researcher Affiliation | Academia | 1 Xidian University, Xi'an 710126, China; 2 Hangzhou Dianzi University, Hangzhou 310018, China; 3 KTH Royal Institute of Technology, Stockholm 100 44, Sweden; 4 University of Technology Sydney, NSW 2007, Australia. All listed affiliations are academic institutions. |
| Pseudocode | No | The paper describes the methodology using narrative text, equations, and diagrams (e.g., Figure 4), but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The paper provides a code link (https://github.com/AiArt-Gao/QuARF) and states that the code has been released online. |
| Open Datasets | Yes | The paper references numerous publicly available datasets with citations, including: the CUFS dataset (Wang and Tang 2008), the summer2winter dataset (Zhu et al. 2017), the MetFaces dataset (Karras et al. 2020), the COCO dataset and the Anime Colorization dataset (Chan, Durand, and Isola 2022), the MSCOCO and WikiArt datasets (Lin et al. 2014; Phillips and Mackintosh 2011), the DF2K (Lim et al. 2017) and OST300 (Wang et al. 2018) datasets, Urban100, BSD100, Manga109 and AGAN-Data, CelebAMask-HQ (Lee et al. 2020), the HQSeg-44K dataset (Ke et al. 2024), ThinObject-5K (Liew et al. 2021) (test set), DIS (Qin et al. 2022) (validation set), HR-SOD (Zeng et al. 2019), and COIFT (Liew et al. 2021). |
| Dataset Splits | Yes | For Single-Image Super-Resolution (SISR), the dataset is split 7:3 into training and testing sets, with the test set referred to as DF1K. For other tasks, the paper states: "We test the model performance on ThinObject-5K (Liew et al. 2021) (test set), DIS (Qin et al. 2022) (validation set), HR-SOD (Zeng et al. 2019) and COIFT (Liew et al. 2021)." The original clear images, as well as their degraded versions (5 different levels of hybrid distortion following CodeFormer (Zhou et al. 2022a)), are used for training and testing accordingly. |
| Hardware Specification | No | The paper discusses the methodology and experimental results across various tasks and models, but it does not specify any particular hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not explicitly mention specific software dependencies or their versions (e.g., Python, PyTorch, TensorFlow, CUDA) used to implement and run the described methodology. |
| Experiment Setup | No | The paper states: "In each task, we select an advanced DNN-based method as the baseline model, and then apply QuARF to the network... train the original model exactly following the official settings or using the officially released model (Official)". However, it does not explicitly provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) in the main text. |