Intra and Inter Parser-Prompted Transformers for Effective Image Restoration
Authors: Cong Wang, Jinshan Pan, Liyan Wang, Wei Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that PPTformer achieves state-of-the-art performance on image deraining, defocus deblurring, desnowing, and low-light enhancement. |
| Researcher Affiliation | Academia | 1Shenzhen Campus of Sun Yat-sen University, China 2Centre for Advances in Reliability and Safety, Hong Kong 3The Hong Kong Polytechnic University, Hong Kong 4Nanjing University of Science and Technology, China 5Dalian University of Technology, China |
| Pseudocode | No | The paper describes mathematical formulations for its components (e.g., Intra PPA, Inter PPA, PPFN) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/supersupercong/pptformer |
| Open Datasets | Yes | We evaluate PPTformer on benchmarks for 4 image restoration tasks: (a) image deraining, (b) single-image defocus deblurring, (c) image desnowing, and (d) low-light image enhancement. ... Test100 (Zhang, Sindagi, and Patel 2019), Rain100H (Yang et al. 2017b), and Test2800 (Fu et al. 2017a). ... DPDD dataset (Abuolaim and Brown 2020). ... CSD (Chen et al. 2021), SRRS (Chen et al. 2020), and Snow100K (Liu et al. 2018) datasets. ... LOL (Wei et al. 2018) and LOL-v2 (Yang et al. 2020). |
| Dataset Splits | No | The paper uses standard benchmark datasets (Test100, Rain100H, Test2800, DPDD, CSD, SRRS, Snow100K, LOL, and LOL-v2) and refers to their test sets, but it does not state training/validation/test split percentages or sample counts, nor does it cite the split methodology for these datasets in the provided text. |
| Hardware Specification | No | The paper mentions training models and conducting experiments but does not specify any hardware details such as GPU models, CPU types, or other computer specifications used. |
| Software Dependencies | No | The paper mentions using the AdamW optimizer (Loshchilov and Hutter 2019) and SAM (Kirillov et al. 2023) to generate parsers, but it does not specify any software libraries or programming languages with their version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We train PPTformer using the AdamW optimizer (Loshchilov and Hutter 2019) with the initial learning rate 5e-4 that is gradually reduced to 1e-7 with the cosine annealing (Loshchilov and Hutter 2017). The training patch size is set as 256x256 pixels. ... To constrain the training of PPTformer, we use the same loss function (Kong et al. 2023) with default parameters. |
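The learning-rate schedule quoted above (cosine annealing from 5e-4 down to 1e-7) can be sketched as a small standalone function. This is an illustration, not the authors' code; the total number of training steps is an assumption, since the paper excerpt does not state the training length.

```python
import math

def cosine_annealed_lr(step, total_steps, lr_init=5e-4, lr_min=1e-7):
    """Cosine annealing from lr_init down to lr_min, following
    Loshchilov & Hutter (2017). `total_steps` is a placeholder;
    the paper does not specify the training duration."""
    cos = math.cos(math.pi * step / total_steps)
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + cos)

# At step 0 the rate equals the initial 5e-4; at the final step it
# reaches exactly the floor of 1e-7, matching the quoted setup.
print(cosine_annealed_lr(0, 100_000))        # 5e-4
print(cosine_annealed_lr(100_000, 100_000))  # 1e-7
```

In a PyTorch training loop this corresponds to `torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps, eta_min=1e-7)` wrapped around an `AdamW` optimizer.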