Position: Editing Large Language Models Poses Serious Safety Risks

Authors: Paul Youssef, Zhixue Zhao, Daniel Braun, Jörg Schlötterer, Christin Seifert

ICML 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This position paper argues that editing LLMs poses serious safety risks that have been largely overlooked. |
| Researcher Affiliation | Academia | Marburg University, Marburg, Germany; University of Sheffield, Sheffield, UK; University of Mannheim, Mannheim, Germany. |
| Pseudocode | No | The paper contains no structured pseudocode or algorithm blocks; methodologies are described in prose and by reference to other works. |
| Open Source Code | No | As a position paper, it proposes no new methodology for which code would be released. It references open-source implementations of knowledge-editing methods by other authors, such as FastEdit (Hiyouga, 2023) and EasyEdit (Wang et al., 2024d), but provides no code of its own. |
| Open Datasets | No | The paper conducts no experiments and uses no datasets directly. It references datasets used by other works, such as the zsRE dataset in Section 3.1, but provides no access information for a dataset used in its own methodology. |
| Dataset Splits | No | The paper conducts no experiments and therefore specifies no dataset splits. |
| Hardware Specification | No | The paper describes no experimental setup or hardware used for its own research. |
| Software Dependencies | No | The paper presents no new methodology requiring specific, versioned software dependencies. |
| Experiment Setup | No | The paper describes no experimental setup, hyperparameters, or training configurations. |