PALM: Pushing Adaptive Learning Rate Mechanisms for Continual Test-Time Adaptation
Authors: Sarthak Kumar Maharana, Baoming Zhang, Yunhui Guo
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive image classification experiments on CIFAR-10C, CIFAR-100C, and ImageNet-C, demonstrating the superior efficacy of our method compared to prior approaches. |
| Researcher Affiliation | Academia | Sarthak Kumar Maharana, Baoming Zhang, Yunhui Guo The University of Texas at Dallas, Richardson, USA EMAIL |
| Pseudocode | No | The paper describes the methodology using equations and prose, but does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/sarthaxxxxx/PALM |
| Open Datasets | Yes | Following the standard benchmarks set by (Wang et al. 2022), we evaluate our proposed method on CIFAR-10C, CIFAR-100C, and ImageNet-C, based on image corruption schemes as set in (Hendrycks and Dietterich 2019). |
| Dataset Splits | Yes | Following the standard benchmarks set by (Wang et al. 2022), we evaluate our proposed method on CIFAR-10C, CIFAR-100C, and ImageNet-C, based on image corruption schemes as set in (Hendrycks and Dietterich 2019). ... To maintain fairness regarding the source models, we employ WideResNet-28 ... ResNeXt-29 ... and ResNet-50 ..., all available on RobustBench (Croce et al. 2020). |
| Hardware Specification | Yes | With a single NVIDIA A5000 GPU, PALM incurs a slightly higher adaptation time/batch, due to the two-stage proposed method. |
| Software Dependencies | No | The paper mentions using an Adam optimizer and specific model architectures like WideResNet-28, ResNeXt-29, and ResNet-50, but does not provide specific version numbers for software libraries or dependencies. |
| Experiment Setup | Yes | We optimize using an Adam optimizer, setting base learning rates (κ) to 5e-4 for CIFAR-10C and CIFAR-100C, and 5e-5 for ImageNet-C. For balancing parameter sensitivities, we set α to 0.5, 0.9, and 0.5 respectively, with temperature coefficients T set to 50, 100, and 1000 respectively. We set η to 1, 0.5, and 0.3, and λ to 0.01 throughout. Batch sizes are set to 200, 200, and 64 for each dataset, following CoTTA, and results are only reported for the highest severity level of 5 for each task. |
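The per-dataset hyperparameters quoted above can be collected into a configuration sketch for anyone attempting to reproduce the setup. The values come from the paper's experiment setup; the dictionary layout and key names (`base_lr`, `alpha`, etc.) are illustrative assumptions, not taken from the authors' released code.

```python
# Hypothetical reproduction config for PALM, assembled from the
# hyperparameters reported in the paper. Key names are illustrative.
PALM_CONFIG = {
    "CIFAR-10C":  {"base_lr": 5e-4, "alpha": 0.5, "temperature": 50,
                   "eta": 1.0, "lambda": 0.01, "batch_size": 200},
    "CIFAR-100C": {"base_lr": 5e-4, "alpha": 0.9, "temperature": 100,
                   "eta": 0.5, "lambda": 0.01, "batch_size": 200},
    "ImageNet-C": {"base_lr": 5e-5, "alpha": 0.5, "temperature": 1000,
                   "eta": 0.3, "lambda": 0.01, "batch_size": 64},
}

def get_config(dataset: str) -> dict:
    """Look up the reported hyperparameters for one benchmark.

    Results in the paper are reported only at corruption severity 5.
    """
    return PALM_CONFIG[dataset]
```

Note that λ = 0.01 is shared across all three benchmarks, while the temperature T varies by two orders of magnitude between CIFAR-10C and ImageNet-C.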