Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline and validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
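As a minimal sketch of the validation step described above, agreement between automated and manual labels can be measured as simple per-variable accuracy. The function name and label values below are hypothetical, for illustration only:

```python
def classification_accuracy(llm_labels, manual_labels):
    """Fraction of reproducibility variables where the automated LLM
    label matches the manually assigned gold label (illustrative)."""
    assert len(llm_labels) == len(manual_labels), "label lists must align"
    matches = sum(a == m for a, m in zip(llm_labels, manual_labels))
    return matches / len(llm_labels)

# Example: 2 of 3 variables agree with the manual labels.
acc = classification_accuracy(["Yes", "No", "Yes"], ["Yes", "Yes", "Yes"])
```

In practice the published accuracy metrics in [1] are computed over the full manually labeled dataset, not a toy example like this.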
Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima
Authors: Guangyuan Shi, Jiaxin Chen, Wenlong Zhang, Li-Ming Zhan, Xiao-Ming Wu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically evaluate our proposed method for incremental few-shot learning and demonstrate its effectiveness by comparison with state-of-the-art methods. |
| Researcher Affiliation | Academia | Guangyuan Shi, Jiaxin Chen, Wenlong Zhang, Li-Ming Zhan, Xiao-Ming Wu, Department of Computing, The Hong Kong Polytechnic University EMAIL EMAIL |
| Pseudocode | Yes | Algorithm 1: F2M |
| Open Source Code | Yes | The source code is available at https://github.com/moukamisama/F2M. |
| Open Datasets | Yes | Datasets. For CIFAR-100 and miniImageNet, we randomly select 60 classes as the base classes and the remaining 40 classes as the new classes. ... For CUB-200-2011 with 200 classes, we select 100 classes as the base classes and 100 classes as the new ones. |
| Dataset Splits | No | The paper describes how training and test data are used but does not explicitly define a separate 'validation' dataset split for hyperparameter tuning. It states 'We tune the methods re-implemented by us to the best performance,' implying some validation was performed, but without specific split details. |
| Hardware Specification | Yes | The experiments are conducted with NVIDIA GPU RTX3090 on CUDA 11.0. |
| Software Dependencies | Yes | The experiments are conducted with NVIDIA GPU RTX3090 on CUDA 11.0. |
| Experiment Setup | Yes | In the base training stage, we select the last 4 or 8 convolution layers to inject noise... The flat region bound b is set as 0.01. We set the number of times for noise sampling as M = 2 4... In each incremental few-shot learning session, the total number of training epochs is 6, and the learning rate is 0.02. |
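The setup row above describes the core flat-minima objective of F2M: perturb selected parameters with noise bounded by b and average the loss over M samplings. The sketch below illustrates that idea on a toy least-squares model; it is not the authors' implementation (their code injects noise into the last convolution layers of a network), and the function and variable names are assumptions:

```python
import numpy as np

def flat_region_loss(w, X, y, b=0.01, M=4, rng=None):
    """Illustrative sketch of the flat-minima objective: average the
    squared-error loss over M perturbations of the parameters w, each
    drawn uniformly from [-b, b] (b and M follow the reported setup)."""
    rng = rng or np.random.default_rng(0)
    losses = []
    for _ in range(M):
        w_noisy = w + rng.uniform(-b, b, size=w.shape)  # sample within the flat region bound
        pred = X @ w_noisy
        losses.append(np.mean((pred - y) ** 2))
    return float(np.mean(losses))

# Usage: minimizing this averaged loss favors parameters whose entire
# b-neighborhood has low loss, i.e. a flat region.
loss = flat_region_loss(np.zeros(3), np.eye(3), np.ones(3))
```

Averaging over perturbed copies, rather than evaluating the loss at a single point, is what biases training toward flat regions that remain low-loss under the small parameter shifts induced by later incremental sessions.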