EigenSR: Eigenimage-Bridged Pre-Trained RGB Learners for Single Hyperspectral Image Super-Resolution
Authors: Xi Su, Xiangfei Shen, Mingyang Wan, Jing Nie, Lihui Chen, Haijun Liu, Xichuan Zhou
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that EigenSR outperforms the state-of-the-art (SOTA) methods in both spatial and spectral metrics, validating the effectiveness of introducing the pre-trained RGB models. |
| Researcher Affiliation | Academia | Xi Su, Xiangfei Shen, Mingyang Wan, Jing Nie, Lihui Chen, Haijun Liu*, Xichuan Zhou Chongqing University EMAIL, haijun EMAIL |
| Pseudocode | Yes | Algorithm 1: Inference with iterative spectral regularization. |
| Open Source Code | Yes | Code https://github.com/enter-i-username/EigenSR |
| Open Datasets | Yes | We list the dataset information in Table 1. The ARAD 1K (He et al. 2023), CAVE (Yasuma et al. 2010), and Harvard (Chakrabarti and Zickler 2011) datasets contain high-quality HSIs of indoor and outdoor scenes, each with 31 spectral bands, collected using three different sensors. The remote sensing datasets Pavia [2], DC Mall [3], and Chikusei (Yokoya and Iwasaki 2016) contain more than 100 channels, but have low spatial resolutions. The RESISC45 [4] includes HR RGB images of remote sensing scenes. [2] https://ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes [3] http://lesun.weebly.com/hyperspectral-data-set.html [4] https://tensorflow.google.cn/datasets/catalog/resisc45 |
| Dataset Splits | Yes | The ARAD 1K dataset includes 950 HSIs, with the first 900 used as the training set (denoted as ARAD 1K-Train) and the remaining 50 used as the test set (denoted as ARAD 1K-Test). The CAVE and Harvard datasets contain only a few dozen images, and we used them as test sets. We used the RESISC45 dataset as the training set, and the Pavia, DC Mall, and Chikusei datasets as test sets to conduct blind testing on the three datasets without the HR reference image. |
| Hardware Specification | Yes | We used two NVIDIA 3090 cards to run the algorithms in PyTorch. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number or other software dependencies with their versions. |
| Experiment Setup | Yes | We adopted low-rank adaptation (LoRA) (Hu et al. 2021) for parameter-efficient fine-tuning, applying LoRA parameters in parallel with all q and v parameters in the pre-trained Transformer (Zhou et al. 2024b), and set the low-rank hyperparameter r to 4 as suggested in the original paper. We fine-tuned on the ARAD 1K-Train dataset for 2000 epochs and on the RESISC45 dataset for 50 epochs, considering the number of training samples. We used the Adam optimizer with a learning rate of 0.001 and the batch size was 64. When testing on unseen data, we set R to 50% of the number of channels L. For EigenSR-β, the number of iterations Nit was set to 5, and the constant λ was empirically set to 0.8 for SR ×2, and 0.4 for SR ×4 and ×8, respectively. |
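The LoRA setup quoted above (a trainable low-rank branch added in parallel with the frozen q and v projection weights, rank r = 4) can be sketched as below. This is a minimal NumPy illustration of the generic LoRA update y = Wx + (α/r)·BAx, not the paper's actual implementation; the function name, the scaling constant α, and the toy dimensions are assumptions for the example.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Frozen pre-trained weight W plus a parallel low-rank LoRA branch.

    x: (d_in,) input vector
    W: (d_out, d_in) frozen pre-trained projection (e.g. a q or v weight)
    A: (r, d_in), B: (d_out, r) trainable low-rank factors, r << d_in
    Returns W @ x + (alpha / r) * B @ A @ x.
    """
    r = A.shape[0]
    return W @ x + (alpha / r) * (B @ (A @ x))

# Toy example with the paper's rank r = 4.
rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 4
W = rng.standard_normal((d_out, d_in))      # frozen weight
A = rng.standard_normal((r, d_in)) * 0.01   # small random init for A
B = np.zeros((d_out, r))                    # B starts at zero, so the LoRA
x = rng.standard_normal(d_in)               # branch is initially a no-op
y = lora_forward(x, W, A, B)
assert np.allclose(y, W @ x)  # zero-initialized B leaves outputs unchanged
```

The zero initialization of B is the standard LoRA convention: fine-tuning starts from exactly the pre-trained model's behavior, and only the low-rank factors (2·r·d parameters per adapted projection instead of d²) are updated.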