Salient Frequency-aware Exemplar Compression for Resource-constrained Online Continual Learning
Authors: Junsu Kim, Suhyun Kim
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments employing the baseline OCIL method on benchmark datasets such as CIFAR-100 and MiniImageNet demonstrate the superiority of SFEC over previous exemplar compression methods in streaming scenarios. |
| Researcher Affiliation | Academia | 1 Korea University, Republic of Korea; 2 Kyung Hee University, Republic of Korea. EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology in text and mathematical equations, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing the code for the described methodology, nor does it provide a direct link to a code repository. It mentions 'SK hynix. 2023. HMSDK Github. https://github.com/skhynix/hmsdk.' but this appears to be a reference to a third-party SDK, not the authors' own code. |
| Open Datasets | Yes | The experiments are conducted on two benchmark datasets: CIFAR-100 (Krizhevsky, Hinton et al. 2009) and MiniImageNet (Vinyals et al. 2016). |
| Dataset Splits | Yes | The datasets are split into ten tasks, and each task consists of 10 classes. ... The performance of the model is estimated by a held-out dataset of {D_1, ..., D_N} after each task T. |
| Hardware Specification | No | The paper mentions 'edge platforms are constrained by limited computational power' and 'occupy the limited computation resources (e.g., GPUs)', but does not provide specific hardware models (e.g., GPU/CPU models, memory amounts) used for the experiments. |
| Software Dependencies | No | The paper states that the network is 'utilizing a learning rate of 0.1 with the SGD optimizer' and mentions 'Reduced ResNet18' as the backbone model. However, it does not specify software versions for libraries like Python, PyTorch, TensorFlow, or CUDA. |
| Experiment Setup | Yes | The backbone model is Reduced ResNet18, which is consistent with numerous studies in online continual learning... The network is trained on samples drawn from both the data stream and the memory, utilizing a learning rate of 0.1 with the SGD optimizer. For other hyperparameters of the existing methods, we follow the papers... For instance, MRDC sets five quality candidates for JPEG compression, which are 10, 25, 50, 75, 90. CIM uses a compression ratio of 4.0 for non-discriminative pixels. We use a quality level of 75 for naive JPEG, which is the default value. Meanwhile, SFEC utilizes the same patch size (8 × 8) and the YUV conversion weights with JPEG. In this paper, we conducted all experiments using λ = 1. We examine ER-ACE as the baseline since it requires the same computational overhead as ER, the simplest OCIL strategy, while achieving superior performance than ER (Caccia et al. 2022). ER-ACE performs a single gradient update over a batch from the stream and a batch from the memory. |
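The setup row notes that SFEC reuses JPEG's 8 × 8 patch size and YUV conversion weights. As a minimal sketch of what that entails, the snippet below splits an RGB image into 8 × 8 luma patches using the standard JPEG (BT.601) luma weights; the function name `luma_patches` and the cropping behavior are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# JPEG (BT.601) RGB -> luma weights: Y = 0.299 R + 0.587 G + 0.114 B.
# The table states SFEC shares these weights and the 8x8 patch size with JPEG;
# everything beyond that in this sketch is assumed for illustration.
YUV_WEIGHTS = np.array([0.299, 0.587, 0.114])
PATCH = 8

def luma_patches(img):
    """Convert an HxWx3 RGB image to luminance and tile it into 8x8 patches."""
    y = img @ YUV_WEIGHTS                  # per-pixel luminance, shape (H, W)
    h, w = y.shape
    h8, w8 = h - h % PATCH, w - w % PATCH  # crop to a multiple of the patch size
    y = y[:h8, :w8]
    # Reshape into a (rows, cols, 8, 8) grid of patches.
    return y.reshape(h8 // PATCH, PATCH, w8 // PATCH, PATCH).swapaxes(1, 2)

patches = luma_patches(np.random.rand(32, 32, 3))
print(patches.shape)  # (4, 4, 8, 8) for a 32x32 CIFAR-sized image
```

A 32 × 32 CIFAR-100 image thus yields a 4 × 4 grid of 8 × 8 luma patches, the granularity at which JPEG-style frequency analysis operates.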