Knowledge Swapping via Learning and Unlearning

Authors: Mingyu Xing, Lechao Cheng, Shengeng Tang, Yaxiong Wang, Zhun Zhong, Meng Wang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on various tasks like image classification, object detection, and semantic segmentation validate the effectiveness of the proposed strategy. The source code is available at https://github.com/xingmingyu123456/KnowledgeSwapping
Researcher Affiliation | Academia | ¹School of Computer Science and Information Engineering, Hefei University of Technology; ²School of Computer Science, University of Nottingham. Correspondence to: Lechao Cheng <EMAIL>, Zhun Zhong <EMAIL>.
Pseudocode | No | The paper describes the methodology and training protocols using mathematical equations and block diagrams (e.g., Figure 3), but it does not include formal pseudocode or an algorithm block.
Open Source Code | Yes | The source code is available at https://github.com/xingmingyu123456/KnowledgeSwapping
Open Datasets | Yes | For image classification tasks, the learning set includes CUB-200-2011 (Wah et al., 2011), Oxford-IIIT Pet (Parkhi et al., 2012), RESISC45 (Cheng et al., 2017), and PlantVillage (Geetharamani & Pandian, 2019); both the retention set and the forgetting set are selected from ImageNet-100. For object detection tasks, the learning set consists of CUB-200-2011 and Stanford Dogs (Dataset, 2011); both the retention set and the forgetting set are sourced from the COCO (Lin et al., 2014) dataset. For semantic segmentation tasks, the learning set includes Pascal VOC (Hoiem et al., 2009), COCO, Oxford-IIIT Pet (Parkhi et al., 2012), and DeepGlobe Land (Demir et al., 2018).
Dataset Splits | Yes | For classification on ImageNet-100: Learning Set 5 classes, Forgetting Set 5 classes, Retaining Set 95 classes; or Learning Set 10 classes, Forgetting Set 10 classes, Retaining Set 90 classes. For segmentation, the learning set and forgetting set of each dataset include 5 randomly selected classes, and the retaining set includes all other classes from ADE20K. For detection, the forgetting set consists of five randomly selected COCO classes (person, teddy bear, toilet, bench, and bed), while all remaining classes form the retention set; the learning sets consist of 5 classes each: Black-footed Albatross, Laysan Albatross, Sooty Albatross, Groove-billed Ani, and Brewer Blackbird for CUB-200-2011, and Chihuahua, Maltese Dog, Basset, American Staffordshire Terrier, and Norwich Terrier for Stanford Dogs.
Hardware Specification | Yes | All experiments are conducted on a hardware setup comprising 2 RTX 4090 GPUs.
Software Dependencies | Yes | The software environment is configured as Python 3.12, PyTorch 2.5.1, and CUDA 12.4.
Experiment Setup | Yes | The AdamW optimizer is employed for all training and forgetting phases. For image classification tasks... hyperparameters are set to α = 0.05 and β = 0.2, while BND = 105 in the forgetting phase. For object detection tasks... the learning phase employs α = 0.01 and β = 0.9, while the forgetting phase uses BND = 15, α = 0.01, and β = 0.2. For semantic segmentation tasks... the learning phase is configured with α = 0.01 and β = 0.9, while the forgetting phase is set to BND = 115, α = 0.01, and β = 0.2.
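The α/β/BND values quoted in the Experiment Setup row can be collected into a small lookup table for quick reference. This is a minimal sketch: the dictionary layout and the helper name `phase_config` are illustrative (not from the released code), and the classification α = 0.05, β = 0.2 are assumed to apply to both phases, since the quote only pins BND = 105 to the forgetting phase.

```python
# Per-task hyperparameters as quoted in the paper's experiment setup.
# AdamW is the optimizer for all training and forgetting phases.
PHASE_CONFIGS = {
    "image_classification": {
        "learning":   {"alpha": 0.05, "beta": 0.2},
        "forgetting": {"alpha": 0.05, "beta": 0.2, "BND": 105},
    },
    "object_detection": {
        "learning":   {"alpha": 0.01, "beta": 0.9},
        "forgetting": {"alpha": 0.01, "beta": 0.2, "BND": 15},
    },
    "semantic_segmentation": {
        "learning":   {"alpha": 0.01, "beta": 0.9},
        "forgetting": {"alpha": 0.01, "beta": 0.2, "BND": 115},
    },
}


def phase_config(task: str, phase: str) -> dict:
    """Return the quoted hyperparameters for one task/phase pair."""
    return PHASE_CONFIGS[task][phase]
```

Note that BND appears only in forgetting-phase entries, matching the quoted setup, where it bounds the forgetting process.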