Accelerated Diffusion via High-Low Frequency Decomposition for Pan-Sharpening
Authors: Ge Meng, Jingjia Huang, Jingyan Tu, Yingying Wang, Yunlong Lin, Xiaotong Tu, Yue Huang, Xinghao Ding
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three different datasets demonstrate that our method outperforms existing approaches in quantitative metrics, qualitative metrics, and inference efficiency. ... Table 1: Quantitative comparison of reference metrics. ... Ablation Studies |
| Researcher Affiliation | Academia | 1 Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; 2 School of Informatics, Xiamen University, China; 3 Institute of Artificial Intelligence, Xiamen University, China; EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology using mathematical formulations and descriptive text, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Our experiments involve three satellite image datasets, namely WorldView-II, WorldView-III, and GaoFen2, each comprising several hundred PAN and LRMS image pairs. The paired training samples are unavailable in practice. To create the training dataset, we employ the Wald protocol (Wald, Ranchin, and Mangolini 1997) for generating the necessary paired samples. |
| Dataset Splits | No | The paper mentions cropping PAN images into patches of 128x128 and LRMS patches of 32x32, and the use of 200 full-resolution real datasets for generalization. However, it does not specify explicit train/test/validation splits (e.g., percentages or exact counts) for the main datasets used in the quantitative experiments. |
| Hardware Specification | Yes | We implement our network on the PC with a single NVIDIA TITAN RTX 3090 GPU |
| Software Dependencies | No | We implement our network ... in Pytorch framework. No specific version number for PyTorch or any other library is provided. |
| Experiment Setup | Yes | The Adam optimizer is adopted for optimization. The initial learning rate is set to 1×10⁻⁴. The batch size is set to 8. During the forward and reverse processes, the time step T is set to 200 for the training phase, and the implicit sampling step S is set to 10 for both the training and inference phases. |
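The Wald protocol cited in the Open Datasets row synthesizes training pairs by spatially degrading the available multispectral images, so that the original image can serve as the reference. A minimal sketch of the degradation step, assuming a simple box filter (in practice an MTF-matched Gaussian kernel is typically used, which the paper does not detail):

```python
# Sketch of Wald-protocol degradation: the original MS patch is the
# ground truth, and a downsampled copy plays the role of the LRMS input.
# Box-filter averaging here is an illustrative assumption, not the
# paper's exact kernel.

def wald_downsample(band, ratio=4):
    """Degrade one band (2-D list of floats) by `ratio` via block averaging."""
    h, w = len(band), len(band[0])
    out = []
    for i in range(0, h - h % ratio, ratio):
        row = []
        for j in range(0, w - w % ratio, ratio):
            block = [band[i + di][j + dj]
                     for di in range(ratio) for dj in range(ratio)]
            row.append(sum(block) / (ratio * ratio))
        out.append(row)
    return out

# A 128x128 ground-truth patch yields a 32x32 LRMS patch at ratio 4,
# matching the patch sizes reported in the Dataset Splits row.
gt = [[float(i + j) for j in range(128)] for i in range(128)]
lrms = wald_downsample(gt, ratio=4)
```

At ratio 4 this reproduces the 128×128 → 32×32 relationship between the PAN and LRMS patch sizes quoted above.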
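The Experiment Setup row pairs a diffusion horizon of T=200 with only S=10 implicit (DDIM-style) sampling steps. The paper does not state how those S steps are spaced; a common choice, sketched here as an assumption, is uniform striding over the T training timesteps:

```python
# Hypothetical uniform-stride selection of S implicit sampling steps
# from a T-step diffusion schedule (T=200, S=10 per the reported setup).
# Uniform spacing is an assumption; the paper does not specify it.

def implicit_timesteps(T=200, S=10):
    """Uniformly subsample S timesteps from the range [0, T)."""
    stride = T // S
    return list(range(0, T, stride))

taus = implicit_timesteps()  # 10 timesteps: 0, 20, 40, ..., 180
```

Reducing 200 reverse steps to 10 implicit steps is what yields the inference-efficiency gains the abstract claims, at the cost of a coarser reverse trajectory.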