Specifying What You Know or Not for Multi-Label Class-Incremental Learning
Authors: Aoting Zhang, Dongbao Yang, Chang Liu, Xiaopeng Hong, Yu Zhou
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments validate that our method effectively alleviates catastrophic forgetting in MLCIL, surpassing the previous state-of-the-art by 3.3% on average accuracy for MS-COCO B0C10 setting without replay buffers. |
| Researcher Affiliation | Academia | 1Institute of Information Engineering, Chinese Academy of Sciences 2VCIP & TMCC & DISSec, College of Computer Science, Nankai University 3School of Cyber Security, University of Chinese Academy of Sciences 4Harbin Institute of Technology 5Tsinghua University |
| Pseudocode | No | The paper describes the proposed method, HCP, using prose and mathematical formulas, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our codes are available at https://github.com/InfLoop111/HCP. |
| Open Datasets | Yes | HCP is evaluated on MS-COCO 2014 (Lin et al. 2014) and PASCAL VOC 2007 (Everingham 2007) datasets. |
| Dataset Splits | Yes | MS-COCO contains 82,081 training images and 40,137 test images, which covers 80 common objects with an average of 2.9 labels per image. PASCAL VOC contains 5,011 images in the train-val set, and 4,952 images in the test set. |
| Hardware Specification | No | The paper describes training parameters and model architecture but does not specify the hardware (e.g., GPU, CPU models) used for running experiments. |
| Software Dependencies | No | The paper mentions using Adam optimizer and One Cycle LR scheduler, but does not provide specific software versions for libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | We train the model with a batch size of 64 for 20 epochs, using Adam (Kingma and Ba 2014) optimizer and One Cycle LR scheduler with a weight decay of 1e-4. In the base session, we set the learning rate to 4e-5. In the following sessions, it adjusts to 1e-4 for MS-COCO and 4e-5 for VOC. In dynamic feature purification module, we set 3 attention blocks for VOC and 1 for MS-COCO. |
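The training configuration quoted above can be sketched in PyTorch. This is a minimal illustration, not the authors' released code: the `nn.Linear` model is a placeholder for the actual HCP network, and the step count is derived from the reported MS-COCO training-set size and batch size.

```python
# Sketch of the reported setup: Adam + One Cycle LR, weight decay 1e-4,
# batch size 64, 20 epochs, base-session learning rate 4e-5.
import torch
from torch import nn

model = nn.Linear(512, 80)  # placeholder; HCP uses its own architecture

EPOCHS = 20
BATCH_SIZE = 64
STEPS_PER_EPOCH = 82081 // BATCH_SIZE + 1  # MS-COCO: 82,081 training images

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=4e-5,            # base session; later sessions use 1e-4 (MS-COCO) or 4e-5 (VOC)
    weight_decay=1e-4,
)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=4e-5,
    epochs=EPOCHS,
    steps_per_epoch=STEPS_PER_EPOCH,
)

# In the training loop, step the scheduler once per optimizer update:
# optimizer.step(); scheduler.step()
```

Whether the One Cycle schedule is restarted per incremental session or run once over all sessions is not stated in the quoted text; the sketch assumes a single schedule per session.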