SSUN-Net: Spatial-Spectral Prior-Aware Unfolding Network for Pan-Sharpening
Authors: Shijie Fang, Hongping Gan
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the significant advantages of our proposed SSUN-Net over the current SOTA methods. Table 1: Comparison of SSUN-Net with other methods on simulated data. The symbols ↑ and ↓ indicate that a higher or lower value, respectively, corresponds to a better result. Refer to the Supplementary Material for more details on the comparative evaluation. Table 2: Comparison of SSUN-Net with other methods on real data from Gao Fen 2. Ablation Study Impact of key components. To evaluate the effectiveness of CFI, ISF, and the Attention Mechanism (ATT) in MSAM and MSEM, we replace them with Dense Blocks (Huang et al. 2017), which have equivalent parameters. |
| Researcher Affiliation | Academia | Shijie Fang, Hongping Gan* School of Software, Northwestern Polytechnical University, China EMAIL, EMAIL |
| Pseudocode | No | The paper describes the optimization process and the iterative steps of SSUN-Net using mathematical equations and textual explanations, for example, under 'Model Optimization' and 'Deep Unfolding Network' sections. However, it does not include a formal 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Code and Supplementary Materials https://github.com/ICSResearch/SSUN-Net |
| Open Datasets | Yes | Experiment settings. We adopt the World View II (WV-II), World View III (WV-III), and Gao Fen 2 (GF-2) satellite datasets, and generate reduced-resolution simulated data through the Wald (Wald, Ranchin, and Mangolini 1997) protocol for simulation testing, following previous works. |
| Dataset Splits | No | The paper mentions the use of World View II, World View III, and Gao Fen 2 satellite datasets and the generation of simulated data through the Wald protocol. It specifies image sizes ("space sizes of PAN and LRMS are 128×128 and 32×32"). However, it does not explicitly state the specific training, validation, or test splits (e.g., percentages or sample counts) used for these datasets. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware (e.g., GPU models, CPU models, or cloud computing resources) used for running the experiments. |
| Software Dependencies | No | The paper does not explicitly list any specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks like Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | Loss Function. We introduce pixel loss and structural loss to jointly penalize the difference between the reconstructed image X and the Ground-Truth image (GT). The pixel loss, L_Pixel(θ), is defined as the ℓ1 distance between X and GT: L_Pixel(θ) = ‖X − GT‖₁ (27), where θ represents the set of learnable parameters of SSUN-Net. In addition, we establish the structural loss, L_∇a(θ), based on the spatial gradient defined in Eq. (9), as follows: L_∇a(θ) = log‖(∇_a X) − (∇_a GT)‖ (28). Finally, the overall loss function of SSUN-Net is formulated as: L(θ) = L_Pixel(θ) + λ_loss · L_∇a(θ) (29), where λ_loss is the weight factor, which is set to 0.1 to optimize the performance of SSUN-Net. |
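The quoted loss function (Eqs. 27–29) can be sketched as follows. This is a minimal NumPy stand-in, not the authors' implementation: the paper does not specify the framework or the exact form of the spatial gradient operator ∇_a, so first-order finite differences and a small epsilon inside the log are assumptions here.

```python
import numpy as np

def spatial_gradient(img):
    """Assumed form of the spatial gradient: first-order finite
    differences along height and width (edge rows/columns repeated)."""
    gy = np.diff(img, axis=0, append=img[-1:, :])
    gx = np.diff(img, axis=1, append=img[:, -1:])
    return gy, gx

def ssun_loss(x, gt, lambda_loss=0.1, eps=1e-8):
    """Pixel l1 loss (Eq. 27) plus log-scaled structural loss (Eq. 28),
    combined with weight lambda_loss = 0.1 as in Eq. (29).
    The eps term is an assumed guard against log(0)."""
    pixel = np.abs(x - gt).mean()                          # Eq. (27)
    gy_x, gx_x = spatial_gradient(x)
    gy_g, gx_g = spatial_gradient(gt)
    struct = np.log(np.abs(gy_x - gy_g).mean()
                    + np.abs(gx_x - gx_g).mean() + eps)    # Eq. (28)
    return pixel + lambda_loss * struct                    # Eq. (29)
```

Note that the log in the structural term can make the combined loss negative near convergence; only its gradient with respect to θ matters for training.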