Unified Parameter-Efficient Unlearning for LLMs

Authors: Chenlu Ding, Jiancan Wu, Yancheng Yuan, Jinda Lu, Kai Zhang, Alex Su, Xiang Wang, Xiangnan He

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on benchmark datasets demonstrate that LLMEraser excels in efficiently managing various unlearning scenarios while maintaining the overall integrity and efficacy of the models. We conduct experiments on both LLMs and Multimodal Large Language Models (MLLMs)... Extensive evaluations across these diverse scenarios demonstrate that LLMEraser consistently outperforms the state-of-the-art unlearning methods."
Researcher Affiliation | Academia | "1 University of Science and Technology of China; 2 Hong Kong Polytechnic University; 3 MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, USTC"
Pseudocode | Yes | "The pseudocode for computing parameter changes can be found in Appendix B. Error analysis for our proposed algorithm can be found in Appendix D. Algorithm 1: Calculate Parameter Changes Θ_Task"
Open Source Code | Yes | "Our code is available at https://github.com/oceanoceanna/LLMEraser."
Open Datasets | Yes | "Our experimental datasets for LLM4Rec unlearning tasks include three commonly used recommendation datasets: Book-Crossing (Ziegler et al., 2005), MovieLens (Harper & Konstan, 2016), and Last.FM (Cantador et al., 2011). For MLLM unlearning tasks, we utilize MM-SpuBench (Ye et al., 2024) and R-Bench (Wu et al., 2024c) with the representative masked instances for evaluation, partitioning the data into training (60%), validation (20%), and testing (20%) sets."
Dataset Splits | Yes | "For MLLM unlearning tasks, we utilize MM-SpuBench (Ye et al., 2024) and R-Bench (Wu et al., 2024c) with the representative masked instances for evaluation, partitioning the data into training (60%), validation (20%), and testing (20%) sets."
Hardware Specification | Yes | "All methods are run on a single NVIDIA A100 GPU."
Software Dependencies | No | "HVP has a corresponding implementation in PyTorch; refer to https://pytorch.org/docs/stable/autograd.html for details." While PyTorch is mentioned, a specific version number is not provided, and no other software dependencies with version numbers are listed.
Experiment Setup | No | "Algorithm 1: Calculate Parameter Changes Θ_Task. Input: target data, train data loader, old adapter, loss fun, n, Task, init, lr. Output: Parameter changes Θ_Task." "We choose LLaMA2-7B (Touvron et al., 2023b) as our backbone LLM and LLaVA-1.5-7B (Liu et al., 2023a) for the MLLM experiments." Although Algorithm 1 lists 'lr' (learning rate) as an input and the backbone models are specified, concrete hyperparameter values (e.g., a specific learning rate, batch size, or number of epochs) are not explicitly stated in the provided text.
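The 60/20/20 partition quoted in the Dataset Splits row can be reproduced with a simple shuffled split. This is a generic illustration, not the authors' preprocessing code; only the ratios come from the paper, and the function name and seed are arbitrary:

```python
import random

def split_dataset(items, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle items and partition them into train/val/test by the given ratios."""
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must sum to 1"
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible split
    n_train = int(len(items) * ratios[0])
    n_val = int(len(items) * ratios[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 60 20 20
```

A fixed seed matters for reproducibility here: without it, each run would assign different instances to the evaluation sets.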
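The Software Dependencies row points to PyTorch's autograd for the Hessian-vector product (HVP). To show what an HVP computes without depending on any framework, here is a finite-difference stand-in (an assumption-laden sketch, not the paper's implementation): Hv ≈ (∇L(θ + εv) − ∇L(θ − εv)) / (2ε), verified on a quadratic loss whose Hessian is known exactly:

```python
import numpy as np

def hvp_finite_diff(grad_fn, theta, v, eps=1e-5):
    """Hessian-vector product via central finite differences of the gradient:
    H v ≈ (grad(θ + εv) − grad(θ − εv)) / (2ε)."""
    return (grad_fn(theta + eps * v) - grad_fn(theta - eps * v)) / (2 * eps)

# Quadratic loss L(θ) = ½ θᵀAθ has gradient Aθ and Hessian A, so Hv should equal Av.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda theta: A @ theta
v = np.array([1.0, -1.0])
print(hvp_finite_diff(grad, np.zeros(2), v))  # ≈ A @ v = [2., -1.]
```

In practice PyTorch computes the same quantity exactly via double backward (differentiating ∇L·v), which avoids the step-size tuning that finite differences require.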
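Algorithm 1's output, "parameter changes Θ_Task", is characteristic of influence-function-style unlearning, where the update has the form Δθ ≈ −H⁻¹g for a Hessian H and a gradient g of the unlearning objective. The sketch below is not the paper's algorithm; it only illustrates the standard trick of solving H x = g with conjugate gradients so that only HVPs (never the full Hessian inverse) are needed. The toy H and g are made up:

```python
import numpy as np

def conjugate_gradient(hvp, g, iters=50, tol=1e-10):
    """Solve H x = g using only Hessian-vector products (H assumed SPD)."""
    x = np.zeros_like(g)
    r = g - hvp(x)          # residual
    p = r.copy()            # search direction
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy stand-ins: an SPD "Hessian" H and a gradient g of the unlearning objective.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
delta = -conjugate_gradient(lambda v: H @ v, g)  # Δθ ≈ -H⁻¹ g
```

For an n-parameter model this costs a handful of HVPs instead of the O(n³) cost of forming and inverting H, which is what makes such updates feasible for adapter-scale parameter counts.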