Towards Macro-AUC Oriented Imbalanced Multi-Label Continual Learning

Authors: Yan Zhang, Guoqiang Wu, Bingzheng Wang, Teng Pang, Haoliang Sun, Yilong Yin

AAAI 2025

Reproducibility assessment, listed as variable, result, and the supporting LLM response:
Research Type: Experimental. Supporting excerpts: "Finally, a series of experimental results illustrate the effectiveness of our method over several baselines." "Finally, to illustrate the effectiveness of our proposed method, we conduct a series of experiments. Comparisons with other baselines demonstrate the superiority of our approach. Moreover, we have performed ablation studies and experiments to investigate other influencing factors, consistently showing that our method performs well." "In this section, we conduct experiments to illustrate the effectiveness of our method, which is summarized as follows: (1) We conduct comparison experiments with other baselines to illustrate the superiority of our method. (2) The memory size is influential to the replay-based approaches. We illustrate that our method consistently outperforms ER and is less sensitive to memory sizes. (3) Ablation studies show the effect of each component proposed in our method."
Researcher Affiliation: Academia. Yan Zhang, Guoqiang Wu*, Bingzheng Wang, Teng Pang, Haoliang Sun, Yilong Yin*; School of Software, Shandong University.
Pseudocode: Yes. Algorithm 1: Replay-based Continual Learning Procedure.
Input: tasks T, task length T (T > 1), memory M, memory size M;
Parameter: learning rate η, batch size B, epochs n_e, the model f;
Output: learned parameters Θ of f.
1: for t ∈ (1, T) do
2:     Get dataset D_t from T^t;
3:     if t = 1 then
4:         Perform batch learning on D_t;
5:     else // continual learning procedure
6:         Get {M_1, ..., M_{t-1}} from M;
7:         // training iterations
8:         for i ∈ (1, n_e) do
9:             for each batch B_t ⊆ D_t (|B_t| = B) do
10:                Sample a batch B_M of size B from ∪_{i ∈ (1, t-1)} M_i;
11:                Update model parameters Θ according to Eq. (19) (Appendix E.2) with B_t, B_M, η;
12:     // training ends
13:     Update the memory according to Sec. ;
14: return Θ;
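The control flow of Algorithm 1 can be sketched in Python as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the `update` callback stands in for the gradient step of Eq. (19) (which the extracted text does not include), and the simple random-replacement buffer is a placeholder for the paper's memory-update rule (its section reference is elided in the source).

```python
import random

def replay_training(tasks, memory_size=100, epochs=1, batch_size=4,
                    update=lambda batch, mem_batch: None):
    """Sketch of a replay-based continual learning loop (Algorithm 1).

    tasks:  list of datasets, one per task, each a list of examples.
    update: placeholder for the model update of Eq. (19); it receives the
            current batch and a batch replayed from memory.
    """
    memory = []  # examples stored from previous tasks
    for t, dataset in enumerate(tasks):
        for _ in range(epochs):
            for i in range(0, len(dataset), batch_size):
                batch = dataset[i:i + batch_size]
                if t == 0 or not memory:
                    # First task: plain batch learning, no replay.
                    update(batch, [])
                else:
                    # Later tasks: joint step on new data plus a batch
                    # sampled uniformly from the memory of past tasks.
                    mem_batch = random.sample(memory,
                                              min(batch_size, len(memory)))
                    update(batch, mem_batch)
        # Update the memory with examples from the finished task.
        # Simple random replacement here; a placeholder for the paper's rule.
        for x in dataset:
            if len(memory) < memory_size:
                memory.append(x)
            else:
                memory[random.randrange(len(memory))] = x
    return memory
```

As a usage sketch, passing an `update` that records its arguments shows the expected pattern: no replayed examples during the first task, and non-empty replay batches from the second task onward.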
Open Source Code: Yes. https://github.com/ML-Group-SDU/Macro-AUC-CL
Open Datasets: Yes. "Following previous research in Multi-Label Continual Learning (Kim, Jeong, and Kim 2020; Liang and Li 2022; Dong et al. 2023), we utilize three commonly used multi-label classification datasets: PASCAL VOC (Everingham et al. 2015), MS-COCO (Lin et al. 2014) and NUS-WIDE (Chua et al. 2009)."
Dataset Splits: No. The paper states that the datasets are "transformed into their continual versions as C-PASCAL-VOC, C-MSCOCO, and C-NUS-WIDE. More details are described in Appendix E.1." However, the provided text includes neither Appendix E.1 nor any specific percentages or counts for training/validation/test splits in the main body.
Hardware Specification: No. The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments; it mentions "experiments" without any hardware context.
Software Dependencies: No. The paper does not provide specific software dependencies or version numbers (e.g., programming language versions, library versions such as PyTorch or TensorFlow, or CUDA versions) required to replicate the experiments.
Experiment Setup: Yes. Algorithm 1: Replay-based Continual Learning Procedure. Input: tasks T, task length T (T > 1), memory M, memory size M; Parameter: learning rate η, batch size B, epochs n_e, the model f; Output: learned parameters Θ of f.