Sp3ctralMamba: Physics-Driven Joint State Space Model for Hyperspectral Image Reconstruction

Authors: Ge Meng, Jingyan Tu, Jingjia Huang, Yunlong Lin, Yingying Wang, Xiaotong Tu, Yue Huang, Xinghao Ding

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on both simulated and real datasets demonstrate that Sp3ctralMamba significantly elevates HSI reconstruction performance to a new level, surpassing SOTA methods in both quantitative and qualitative metrics. Experiments on different datasets demonstrate the effectiveness of Sp3ctralMamba. Ablation experiments demonstrate the effectiveness of these constraints.
Researcher Affiliation | Academia | 1 Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; 2 School of Informatics, Xiamen University, China; 3 Institute of Artificial Intelligence, Xiamen University, China. EMAIL, EMAIL
Pseudocode | No | The paper describes its methodology in natural language and mathematical equations, accompanied by architectural diagrams (Figures 2 and 3), but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide a link to a code repository.
Open Datasets | Yes | For simulated data, we used two widely-used hyperspectral datasets: CAVE (Park et al. 2007) and KAIST (Choi et al. 2017).
Dataset Splits | Yes | Consistent with the settings in TSA-Net (Meng, Ma, and Yuan 2020), we used the CAVE dataset as the training set and selected 10 scenes from KAIST as the testing set. The patch size during training is 256×256.
Hardware Specification | Yes | We implemented Sp3ctralMamba on a PC with a single NVIDIA RTX 4090 GPU.
Software Dependencies | No | While the paper mentions building the network in the PyTorch framework ('we built our network in the PyTorch framework'), it does not specify a version number for PyTorch or any other software components.
Experiment Setup | Yes | The learning rate was set to 4×10⁻⁴ and the batch size was set to 4. In the initial 200 epochs, we used the reconstruction loss Lrec to optimize the predicted results. In the next 50 epochs, we stopped updating the decoder's gradients and introduced the energy prior LE to enhance the encoder's representation of overall pixel intensity. In the final 50 epochs, we did the opposite and introduced the structure prior LS to enhance the decoder's representation of edge details.
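The staged optimization described above can be sketched as a simple epoch-to-phase mapping. This is a minimal illustration, not the authors' code: the function name is ours, and the assumption that the reconstruction loss Lrec stays active alongside the priors in the later phases is hedged in the comments.

```python
def training_phase(epoch: int):
    """Map a 0-indexed training epoch to (active losses, frozen module).

    Schedule per the paper's setup: 200 epochs of reconstruction loss,
    then 50 epochs with the energy prior L_E (decoder frozen), then
    50 epochs with the structure prior L_S (encoder frozen).
    """
    if epoch < 200:
        # Phase 1: reconstruction loss only, all modules trainable.
        return (["L_rec"], None)
    elif epoch < 250:
        # Phase 2: decoder gradients stopped; energy prior L_E introduced
        # to enhance the encoder's representation of overall pixel intensity.
        # (Keeping L_rec active here is our assumption, not stated in the paper.)
        return (["L_rec", "L_E"], "decoder")
    else:
        # Phase 3: the opposite; structure prior L_S introduced to enhance
        # the decoder's representation of edge details.
        return (["L_rec", "L_S"], "encoder")
```

In a PyTorch training loop, the "frozen module" entry would typically translate to setting `requires_grad = False` on that submodule's parameters at the phase boundary.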