Spatial-Spectral Similarity-Guided Fusion Network for Pansharpening

Authors: Jiazhuang Xiong, Yongshan Zhang, Xinxin Wang, Lefei Zhang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the effectiveness of our S3FNet over state-of-the-art methods. The paper includes a dedicated section '4 Experiments', which details '4.1 Experimental Setup', '4.2 Experimental Results', '4.3 Ablation Study', and '4.4 Parameter Study', along with quantitative comparison tables (Table 1 and Table 2) and visual comparison figures (Figure 5).
Researcher Affiliation | Academia | The authors are affiliated with 'China University of Geosciences', 'University of Macau', and 'Wuhan University', all of which are academic institutions. The email domains are @gmail.com (generic) and @whu.edu.cn (academic).
Pseudocode | No | The paper describes the methodology and model architecture in detail, including equations for computations within the network (e.g., Equations 1-3 for the encoder branches, Equations 4-13 for the decoder and CMAFB, and Equations 14-17 for the loss function). However, it does not include any explicitly labeled pseudocode blocks or algorithms with structured steps.
Open Source Code | Yes | The code is released at https://github.com/ZhangYongshan/S3FNet.
Open Datasets | Yes | Three datasets used in this study were obtained from the PanCollection repository [Deng et al., 2022], including data from the WorldView-3 (WV3), GaoFen-2 (GF2), and WorldView-2 (WV2) satellites.
Dataset Splits | No | The paper describes the synthesis of reduced-resolution data ('synthesized using Wald's protocol, where LRMS and PAN images are downsampled by a factor of 4'), but it does not explicitly provide information on how the datasets were split into training, validation, or test sets with specific percentages, counts, or predefined splits for the experiments.
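The factor-4 reduction quoted above can be illustrated with a minimal block-average downsampler. This is a sketch only: Wald's protocol as usually practiced applies a sensor-specific MTF-matched low-pass filter before decimation, and the paper does not state its exact degradation pipeline, so a plain 4x4 block average stands in here.

```python
def downsample_x4(img):
    """Reduce a 2-D image by a factor of 4 via 4x4 block averaging.

    Illustrative stand-in for Wald's protocol: real pipelines typically
    low-pass filter with the sensor's MTF before decimating; the paper
    does not give those details, so a plain block average is used.
    """
    h, w = len(img), len(img[0])
    assert h % 4 == 0 and w % 4 == 0, "dimensions must be divisible by 4"
    return [
        [
            sum(img[4 * i + di][4 * j + dj]
                for di in range(4) for dj in range(4)) / 16.0
            for j in range(w // 4)
        ]
        for i in range(h // 4)
    ]
```

Applied per band to the multispectral image (and once to the PAN image), this yields the reduced-resolution inputs whose fusion result can be scored against the original multispectral image as reference.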
Hardware Specification | Yes | Our model was implemented using PyTorch on a machine with an NVIDIA 4090 GPU.
Software Dependencies | No | The paper mentions implementing the model using PyTorch ('Our model was implemented using PyTorch...'), but it does not specify the version number of PyTorch or any other software dependencies.
Experiment Setup | Yes | The Adam optimizer is used for network training over 300 epochs with a batch size of 16. The initial learning rate is set to 0.001 and halved every 100 epochs. For the loss function, α = 0.001 and β = 0.5, with α decaying during training.
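The stated schedule (initial learning rate 0.001, halved every 100 epochs over a 300-epoch run) is a standard step decay, equivalent to PyTorch's `StepLR(optimizer, step_size=100, gamma=0.5)`. A minimal dependency-free sketch of the resulting learning-rate curve, assuming only the hyperparameters quoted from the paper:

```python
def learning_rate(epoch, base_lr=1e-3, step=100, gamma=0.5):
    """Step-decay schedule: the learning rate is multiplied by `gamma`
    once every `step` epochs (halved every 100 epochs, per the paper)."""
    return base_lr * (gamma ** (epoch // step))

# Over the paper's 300-epoch run:
#   epochs   0-99  -> 0.001
#   epochs 100-199 -> 0.0005
#   epochs 200-299 -> 0.00025
```

The paper also notes that the loss weight α = 0.001 decays during training; it does not specify that decay's form, so it is not reconstructed here.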