Wavelet-Assisted Multi-Frequency Attention Network for Pansharpening
Authors: Jie Huang, Rui Huang, Jinghao Xu, Siran Peng, Yule Duan, Liang-Jian Deng
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Quantitative and qualitative experiments on multiple datasets demonstrate that our method outperforms existing approaches and shows significant generalization capabilities for real-world scenarios. ... The effectiveness of these strategies has been validated and demonstrated through extensive ablation experiments. |
| Researcher Affiliation | Academia | ¹University of Electronic Science and Technology of China; ²Institute of Automation, Chinese Academy of Sciences; ³School of Artificial Intelligence, University of Chinese Academy of Sciences |
| Pseudocode | No | The paper describes the methodology using textual explanations, mathematical equations, and figures (Figure 3, Figure 4, Figure 5) to illustrate components and workflow, but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Code https://github.com/Jie-1203/WFANet |
| Open Datasets | Yes | We obtain our datasets and data processing methods from the PanCollection repository (Deng et al. 2022). https://github.com/liangjiandeng/PanCollection |
| Dataset Splits | No | The paper mentions: 'Each training dataset consists of PAN, LRMS, and GT image pairs with sizes of 64×64, 16×16×8, and 64×64×8, respectively.' However, it does not specify the explicit percentages or counts for training, validation, and test splits of the overall datasets used. |
| Hardware Specification | Yes | We implement our network using the PyTorch framework on an RTX 4090D GPU. |
| Software Dependencies | No | The paper mentions using the 'PyTorch framework', but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The learning rate is set to 9×10⁻⁴ and is halved every 90 epochs. The model is trained for 360 epochs with a batch size of 32. The Adam optimizer is employed. |
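The reported setup (learning rate 9×10⁻⁴, halved every 90 epochs, 360 epochs total) corresponds to a standard step-decay schedule. A minimal, dependency-free sketch of that schedule is below; the function name is ours, and in PyTorch the same behavior would come from `torch.optim.lr_scheduler.StepLR(optimizer, step_size=90, gamma=0.5)`:

```python
def lr_at_epoch(epoch: int,
                base_lr: float = 9e-4,
                step_size: int = 90,
                gamma: float = 0.5) -> float:
    """Step-decay learning rate: multiply base_lr by gamma
    once every `step_size` epochs (as described in the paper)."""
    return base_lr * gamma ** (epoch // step_size)


# Learning rate over the 360-epoch training run described in the paper:
# epochs 0-89: 9e-4, 90-179: 4.5e-4, 180-269: 2.25e-4, 270-359: 1.125e-4
schedule = [lr_at_epoch(e) for e in (0, 90, 180, 270)]
```

This yields four plateaus over training; whether the final halving at epoch 270 matters in practice is not discussed in the paper.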