Componential Prompt-Knowledge Alignment for Domain Incremental Learning
Authors: Kunlun Xu, Xu Zou, Gang Hua, Jiahuan Zhou
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on DIL benchmarks demonstrate the effectiveness of our KA-Prompt. Our source code is available at https://github.com/zhoujiahuan1991/ICML2025KA-Prompt. (...) Experimental results (e.g., Fig. 1 (b)) verify that our method effectively enhances the knowledge compatibility between prompts, improving both the acquisition and inference capacity. Extensive experiments on DIL benchmarks show that our KA-Prompt outperforms the existing methods by large margins. (...) (3) Extensive experiments conducted on four DIL benchmarks demonstrate the significant superiority of the proposed KA-Prompt over the state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | 1Wangxuan Institute of Computer Technology, Peking University, Beijing, China 2School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China 3Amazon.com, Inc, Bellevue, WA 98004, USA. Correspondence to: Jiahuan Zhou <EMAIL>. |
| Pseudocode | Yes | A. Algorithm. The overall process of our key phases ΨM and ΨL are shown in Alg. 1 and Alg. 2, respectively. Algorithm 1 Reusable Knowledge Mining (ΨM) (...) Algorithm 2 Aligning-guided New Prompt Learning (ΨL) |
| Open Source Code | Yes | Our source code is available at https://github.com/zhoujiahuan1991/ICML2025KA-Prompt. |
| Open Datasets | Yes | Datasets: Our experiments are conducted on four multi-domain benchmarks including DomainNet, ImageNet-R, ImageNet-C, and ImageNet-Mix. (...) DomainNet (Peng et al., 2019) (...) ImageNet-R (Hendrycks et al., 2021) (...) ImageNet-C (Hendrycks & Dietterich, 2018) (...) ImageNet-Mix (Liu et al., 2024a) |
| Dataset Splits | Yes | ImageNet-R (Hendrycks et al., 2021) contains 30,000 images of 200 categories. All images are split into 15 different style domains. The images in each domain are divided into training and testing sets with a 7:3 ratio. (...) ImageNet-C (Hendrycks & Dietterich, 2018) contains 1000 categories covering 15 quality corruptions and environmental changes. Following (Liu et al., 2024a), 200 categories identical to ImageNet-R in ImageNet-C are used to form a DIL benchmark, where each category contains 7,000 images for training and 3,000 images for testing. |
| Hardware Specification | Yes | All experiments are conducted on a single Nvidia 4090 GPU. |
| Software Dependencies | No | The Adam optimizer (β1 = 0.9, β2 = 0.999) is adopted to train the model. The default batch size and learning rate for all benchmarks are set to 128 and 0.005 respectively, except for DomainNet where the learning rate is set to 0.0006. The default training epochs are set to 5 except for DomainNet (10 epochs). The hyperparameters τ and λ are set to 0.01 and 0.1 by default, respectively. All experiments are conducted on a single Nvidia 4090 GPU. While the paper mentions the Adam optimizer and ViT model, it does not specify software versions for programming languages (e.g., Python), deep learning frameworks (e.g., PyTorch, TensorFlow), or specific libraries. |
| Experiment Setup | Yes | Implementation Details: We follow the prompt and classifier configuration of C-Prompt (Liu et al., 2024a), e.g., prompt length Lp and prompt number Np of each domain, the shared classifier across all domains. The Adam optimizer (β1 = 0.9, β2 = 0.999) is adopted to train the model. The default batch size and learning rate for all benchmarks are set to 128 and 0.005 respectively, except for DomainNet where the learning rate is set to 0.0006. The default training epochs are set to 5 except for DomainNet (10 epochs). The training images are resized to 224 × 224. The hyperparameters τ and λ are set to 0.01 and 0.1 by default, respectively. |
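The dataset-split row quotes a per-domain 7:3 train/test division. A minimal sketch of such a split, assuming a simple shuffled partition (the paper does not specify the exact splitting procedure or seed; `split_domain` is a hypothetical helper):

```python
import random

def split_domain(image_paths, train_ratio=0.7, seed=0):
    """Split one domain's images into train/test with a 7:3 ratio.

    The shuffle seed and ratio here are illustrative assumptions,
    not values reported in the paper.
    """
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    n_train = int(len(paths) * train_ratio)
    return paths[:n_train], paths[n_train:]

# Example: a toy "domain" of 10 images.
train, test = split_domain([f"img_{i}.jpg" for i in range(10)])
print(len(train), len(test))  # 7 3
```

Applying this per style domain (15 domains for ImageNet-R) would reproduce the described per-domain 7:3 layout.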
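The setup row above fully enumerates the reported hyperparameters, including the DomainNet-specific overrides. A minimal sketch that collects them into a per-benchmark config, assuming hypothetical names (`DEFAULTS`, `make_config`) since the paper releases no such config object:

```python
# Values taken from the quoted implementation details; names are assumptions.
DEFAULTS = dict(
    batch_size=128,
    lr=0.005,
    epochs=5,
    betas=(0.9, 0.999),      # Adam β1, β2
    image_size=(224, 224),   # training resize
    tau=0.01,                # temperature τ
    lam=0.1,                 # loss weight λ
)

# DomainNet is the only benchmark with reported overrides.
OVERRIDES = {"DomainNet": dict(lr=0.0006, epochs=10)}

def make_config(benchmark):
    """Return the training config for a benchmark, applying overrides."""
    cfg = dict(DEFAULTS)
    cfg.update(OVERRIDES.get(benchmark, {}))
    return cfg

print(make_config("DomainNet")["lr"])       # 0.0006
print(make_config("ImageNet-R")["epochs"])  # 5
```

Framework versions remain unspecified in the paper (the "Software Dependencies" row), so any actual reproduction would still need to pin PyTorch/ViT versions independently.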