L3A: Label-Augmented Analytic Adaptation for Multi-Label Class Incremental Learning

Authors: Xiang Zhang, Run He, Chen Jiao, Di Fang, Ming Li, Ziqian Zeng, Cen Chen, Huiping Zhuang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on MS-COCO and PASCAL VOC datasets demonstrate that L3A outperforms existing methods in MLCIL tasks. Our code is available at https://github.com/scut-zx/L3A." (Section 4: Experiments)
Researcher Affiliation | Academia | "1 Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou, China; 2 School of Future Technology, South China University of Technology, Guangzhou, China; 3 Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ). Correspondence to: Huiping Zhuang <EMAIL>."
Pseudocode | Yes | "Algorithm 1 shows the pseudo-code of L3A, which utilizes the PL module to generate overall labels, extracts the sample features, and recursively updates the classifier by WAC." (Algorithm 1: Training process of L3A)
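The "recursively updates the classifier" step quoted above is the kind of update a recursive ridge-regression (recursive least-squares) classifier performs. The sketch below is an illustrative assumption about what such an update looks like, not the authors' actual WAC implementation; all names, shapes, and the `gamma` default are placeholders:

```python
import numpy as np

def init_state(d_feat, n_classes, gamma=1000.0):
    """Initialize the inverse autocorrelation matrix R and weights W.

    gamma plays the role of the ridge regularization term (the paper
    reports gamma = 1000); the API here is hypothetical.
    """
    R = np.eye(d_feat) / gamma          # R = (gamma * I)^{-1}
    W = np.zeros((d_feat, n_classes))   # analytic classifier weights
    return R, W

def recursive_update(R, W, X, Y):
    """One recursive least-squares step on a batch of features X (n, d)
    and multi-hot labels Y (n, c).

    Each call is mathematically equivalent to refitting ridge
    regression on all data seen so far, without storing that data.
    """
    # Woodbury identity: update the inverse without re-inverting a d x d matrix
    K = np.linalg.inv(np.eye(X.shape[0]) + X @ R @ X.T)
    R = R - R @ X.T @ K @ X @ R
    # Correct the weights using the residual of the new batch
    W = W + R @ X.T @ (Y - X @ W)
    return R, W
```

After processing any sequence of batches, `W` matches the closed-form ridge solution on the concatenated data, which is what makes such analytic updates attractive for incremental learning.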
Open Source Code | Yes | "Our code is available at https://github.com/scut-zx/L3A."
Open Datasets | Yes | "We follow previous works (Dong et al., 2023; De Min et al., 2024) in MLCIL and evaluate our method on MS-COCO 2014 (Lin et al., 2014) and PASCAL VOC 2007 (Everingham et al., 2010) datasets."
Dataset Splits | No | "Let the phases be {D_1, D_2, ..., D_t, ...}, where each D_t is divided into a training set D_t^train and a test set D_t^test. D_t^train = {(X_{t,1}, y_{t,1}), ..., (X_{t,i}, y_{t,i}), ..., (X_{t,N_t}, y_{t,N_t})} is of size N_t, where X_{t,i} is an input sample tensor and y_{t,i} is the corresponding multi-hot label vector. ... The cumulative label space for testing expands incrementally and is defined as C_{1:t} = C_1 ∪ ... ∪ C_t. (1)" MS-COCO B0-C10: the model is trained across all 80 classes, divided into 8 continual learning phases, each learning 10 classes.
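The B0-C10 protocol quoted above (no base classes, then 10 new classes per phase over MS-COCO's 80 classes) can be illustrated with a small helper that partitions the class indices into phases; the function name and signature are hypothetical, not from the authors' code:

```python
def b0_cn_splits(num_classes=80, base=0, increment=10):
    """Build incremental class splits.

    With base=0 and increment=10 on 80 classes (MS-COCO B0-C10), this
    yields 8 phases of 10 classes each. A nonzero `base` would model
    protocols that start from a larger base session.
    """
    classes = list(range(num_classes))
    splits = []
    if base:
        splits.append(classes[:base])   # initial base session, if any
    start = base
    while start < num_classes:
        splits.append(classes[start:start + increment])
        start += increment
    return splits
```

The cumulative test label space C_{1:t} at phase t is then simply the union of the first t splits.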
Hardware Specification | No | No specific hardware details (GPU models, CPU types, memory amounts) were explicitly mentioned for running the experiments.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA) were explicitly mentioned in the paper.
Experiment Setup | Yes | "The batch size is set to 64 for MS-COCO and 256 for PASCAL VOC. In all experimental protocols, we set the regularization term γ in Equation (8) to 1000, and the buffer layer size to 8192 for MS-COCO and PASCAL VOC." Tables 5, 6, and 7 report ablations on the regularization term (γ), the buffer layer size, and the confidence threshold (η), respectively.
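For reference, the hyperparameters reported in the quote above can be collected into a minimal config sketch; the dictionary keys are assumptions for illustration, not names from the authors' repository:

```python
# Hypothetical config capturing the reported L3A hyperparameters.
# Only the numeric values come from the paper; key names are illustrative.
L3A_CONFIG = {
    "mscoco":  {"batch_size": 64,  "gamma": 1000, "buffer_size": 8192},
    "voc2007": {"batch_size": 256, "gamma": 1000, "buffer_size": 8192},
}
```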