HSRMamba: Contextual Spatial-Spectral State Space Model for Single Hyperspectral Image Super-Resolution
Authors: Shi Chen, Lefei Zhang, Liangpei Zhang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate our HSRMamba outperforms the state-of-the-art methods in quantitative quality and visual results. Code is available at: https://github.com/Tomchenshi/HSRMamba. Extensive experiments on various datasets demonstrate the superiority and effectiveness of our proposed technique over the state-of-the-art methods. Table 1: Quantitative performance on the Chikusei dataset and Houston dataset at different scale factors. |
| Researcher Affiliation | Academia | (1) National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, P. R. China; (2) Aerospace Information Research Institute, Henan Academy of Sciences. {chenshi, zhanglefei, EMAIL} |
| Pseudocode | No | The paper describes methods and architectural components using text and diagrams, along with mathematical equations, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at: https://github.com/Tomchenshi/HSRMamba. |
| Open Datasets | Yes | Datasets: We conducted experiments on three hyperspectral image datasets: Chikusei [Yokoya and Iwasaki, 2016], Houston2018, and Pavia Center [Huang and Zhang, 2009]. |
| Dataset Splits | Yes | For the Chikusei dataset, 4 non-overlapping images with the size of 512 × 512 × 128 are cropped from the top region. The remaining area is cropped into overlapping HR images for training (10% randomly selected for validation). The spatial size of the LR for training is 32 × 32, while the corresponding HR sizes at scale factors 4 and 8 are 128 × 128 and 256 × 256, respectively. All LR patches are generated by Bicubic downsampling at different scales. Similar to the Chikusei dataset, 8 images from the Houston2018 dataset with the size of 256 × 256 × 48 are cropped from the top region for testing. The spatial resolution of LR and HR training patches is consistent with the Chikusei dataset. |
| Hardware Specification | Yes | The model is implemented in PyTorch and trained on NVIDIA RTX 4090 GPUs. |
| Software Dependencies | No | The paper mentions 'Pytorch' and 'Adam optimizer' but does not provide specific version numbers for these or any other software components. |
| Experiment Setup | Yes | The kernel size of the convolution is set to 3 × 3. We set the number of channels C to 64, the number of CSMG to 4, and the number of CSSM to 2. The initial learning rate is 1e-4, halving every 100 epochs until reaching 400 epochs. Following [Zhang et al., 2018], the reduction ratio in channel attention (CA) is set to 16. During training, the Adam optimizer with Xavier initialization is used with a mini-batch size of 8. |
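The quoted experiment setup fully specifies the learning-rate schedule: an initial rate of 1e-4, halved every 100 epochs over a 400-epoch run. A minimal sketch of that step-decay schedule is shown below; the function name and structure are illustrative, not taken from the paper's released code.

```python
def step_decay_lr(epoch, base_lr=1e-4, step=100, total_epochs=400):
    """Step-decay schedule as described in the paper's experiment setup:
    halve the learning rate every `step` epochs until `total_epochs`.
    Illustrative sketch only, not the authors' implementation."""
    epoch = min(epoch, total_epochs - 1)  # clamp past the final epoch
    return base_lr * (0.5 ** (epoch // step))

# Learning rate at the start of each decay interval:
for e in (0, 100, 200, 300):
    print(e, step_decay_lr(e))
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.StepLR` with `step_size=100` and `gamma=0.5` wrapped around an Adam optimizer, matching the quoted setup.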