OTPNet: ODE-inspired Tuning-free Proximal Network for Remote Sensing Image Fusion

Authors: Wei Yu, Zonglin Li, Qinglin Liu, Xin Sun

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on nine datasets across three different remote sensing image fusion tasks show that our OTPNet outperforms existing state-of-the-art approaches, which validates the effectiveness of our method."
Researcher Affiliation | Academia | School of Computer Science and Technology, Harbin Institute of Technology, China; EMAIL, EMAIL
Pseudocode | No | The paper describes the algorithm using mathematical formulations and architectural diagrams (e.g., Figure 1, Figure 2) but does not include a formal pseudocode block or an algorithm section with structured, numbered steps.
Open Source Code | No | The paper states "For additional implementation details and metrics, please refer to the supplementary materials." but does not mention releasing code, nor does it provide a link to a code repository.
Open Datasets | Yes | "To comprehensively evaluate our approach, we conduct experiments on nine fusion datasets corresponding to these three tasks: Pan-Sharpening task with WorldView-3, GaoFen-2, and QuickBird satellite datasets; HSR task with Pavia Centre, Botswana, and Chikusei datasets; MHF task with CAVE, Harvard and NTIRE2020 datasets."
Dataset Splits | Yes | "For all datasets, we allocate 90% of the data to the training set and the remaining 10% to the validation set."
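The stated 90%/10% split can be reproduced with a simple index partition. The function name, the fixed seed, and the use of shuffling below are illustrative assumptions; the paper does not describe how the split is drawn.

```python
import random

def split_indices(n_samples, val_frac=0.1, seed=0):
    """Partition dataset indices into train/validation index lists.

    The 90%/10% ratio follows the paper; the seed and shuffling
    are illustrative assumptions, not details from the paper.
    """
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)  # deterministic shuffle for reproducibility
    n_val = int(n_samples * val_frac)
    return indices[n_val:], indices[:n_val]  # train, validation

train_idx, val_idx = split_indices(1000)
print(len(train_idx), len(val_idx))  # 900 100
```

Because the shuffle is seeded, re-running the split yields the same partition, which is the property a reproducibility reviewer would look for.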
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU model, CPU type) used to run the experiments.
Software Dependencies | No | The paper mentions that "The proposed OTPNet is implemented in PyTorch and trained using the AdamW optimizer with parameters set to β1 = 0.9 and β2 = 0.999." but does not list software versions or any other dependencies.
Experiment Setup | Yes | "The proposed OTPNet is implemented in PyTorch and trained using the AdamW optimizer with parameters set to β1 = 0.9 and β2 = 0.999. The training is conducted over 120k steps in a multistep schedule. The initial learning rate is set to 2e-4 and is reduced by half every 30k iterations. We adopt a batch size of 64 for all experiments."
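The learning-rate schedule described above (2e-4, halved every 30k of the 120k total steps) can be sketched without any framework. The function name and the assumption that the halving occurs exactly at each 30k-step boundary are mine; in PyTorch this would typically be expressed with a multistep scheduler instead.

```python
def multistep_lr(step, base_lr=2e-4, interval=30_000, gamma=0.5):
    """Learning rate after `step` optimizer steps.

    Implements the schedule quoted from the paper: start at 2e-4
    and multiply by 0.5 every 30k steps. The exact boundary
    behavior is an assumption.
    """
    return base_lr * gamma ** (step // interval)

# Inspect the rate at each milestone of the 120k-step run.
for step in (0, 30_000, 60_000, 90_000):
    print(step, multistep_lr(step))
```

With these defaults the rate decays 2e-4 → 1e-4 → 5e-5 → 2.5e-5 over training, ending after 120k steps as stated in the setup.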