Self-cross Feature based Spiking Neural Networks for Efficient Few-shot Learning
Authors: Qi Xu, Junyang Zhu, Dongdong Zhou, Hao Chen, Yang Liu, Jiangrong Shen, Qiang Zhang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed FSL-SNN significantly improves the classification performance on the neuromorphic dataset N-Omniglot, and also achieves competitive performance to ANNs on static datasets such as CUB and miniImageNet with low power consumption. |
| Researcher Affiliation | Academia | 1School of Computer Science and Technology, Dalian University of Technology, Dalian, China 2Faculty of Electronic and Information Engineering, Xi'an Jiaotong University 3National Key Lab of Human-Machine Hybrid Augmented Intelligence, Xi'an Jiaotong University 4State Key Lab of Brain-Machine Intelligence, Zhejiang University. Correspondence to: Jiangrong Shen <EMAIL>. |
| Pseudocode | No | The paper describes the methodology using prose and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We adopt the following datasets for experiments: N-Omniglot, CUB-200-2011, and miniImageNet. N-Omniglot (Li et al., 2022) is a neuromorphic dataset built based on the original Omniglot dataset... CUB-200-2011 is a dataset focused on the fine-grained classification of birds... miniImageNet (Vinyals et al., 2016) is a derived dataset from ImageNet... |
| Dataset Splits | Yes | In few-shot classification, FSL is usually defined as an N-way K-shot task (i.e., K labeled samples of N unique classes)... Both D_train and D_test consist of multiple episodes, each containing a query set Q = {(x_j, y_j)}_{j=1}^{NK} and a support set S = {(x_i, y_i)}_{i=1}^{NK} of K image-label pairs for each of the N classes, also known as an N-way K-shot episode. ...CUB-200-2011... 200 categories, of which 100, 50, and 50 are used for training, validation, and testing, respectively. miniImageNet... 100 different object categories. Of these categories, 64 are designated for training, 16 for validation, and 20 are reserved for testing. |
| Hardware Specification | No | The paper discusses energy efficiency and energy consumption calculations, but it does not specify the particular hardware (e.g., GPU, CPU models, or cloud platforms with specs) used to run the experiments for this study. |
| Software Dependencies | No | The paper does not mention any specific software dependencies or version numbers (e.g., Python, PyTorch, TensorFlow versions) used for implementing the research. |
| Experiment Setup | Yes | We train the network in a single-stage way, combining two losses to guide the model to classify precisely: a TET-based loss and a contrast-based loss... The final loss function combines these two losses, where λ is a hyperparameter that balances the loss terms: L_Total = λ·L_TET + (1 − λ)·L_info. (9) ...Table 4 shows our experimental results with different values of λ... We conducted experiments on the N-Omniglot dataset under different time steps and scenario settings for performance evaluation... The experimental results on the SCNN backbone network surpassed MAML and Siamese networks, proving the effectiveness of our model architecture. |
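The combined objective in Eq. (9) can be sketched in a few lines. This is a minimal illustration, not the authors' code: `per_step_cross_entropy` is a hypothetical stand-in for the TET loss (which averages a per-time-step classification loss over the T simulation steps of the SNN), and the contrast-based loss value is taken as a given scalar.

```python
import math

def cross_entropy(logits, target):
    """Softmax cross-entropy for one sample (numerically stabilized)."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target]

def per_step_cross_entropy(logits_per_step, target):
    """TET-style loss sketch: average the classification loss over time steps."""
    return sum(cross_entropy(l, target) for l in logits_per_step) / len(logits_per_step)

def combined_loss(loss_tet, loss_info, lam):
    """Eq. (9): L_Total = lam * L_TET + (1 - lam) * L_info, with lam in [0, 1]."""
    assert 0.0 <= lam <= 1.0
    return lam * loss_tet + (1.0 - lam) * loss_info
```

The report notes that the paper sweeps λ (Table 4); in a sketch like this, that is simply a grid over `lam` with the rest of the training loop unchanged.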