In-Context Editing: Learning Knowledge from Self-Induced Distributions

Authors: Siyuan Qi, Bangcheng Yang, Kailin Jiang, Xiaobo Wang, Jiaqi Li, Yifan Zhong, Yaodong Yang, Zilong Zheng

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments on four datasets, obtaining promising results across four key dimensions: accuracy, locality, generalization, and linguistic quality. Experimental results confirm the effectiveness of ICE and demonstrate its potential for continual editing, ensuring that the integrity of the model is preserved while updating information."
Researcher Affiliation | Academia | State Key Laboratory of General Artificial Intelligence, BIGAI; University of Science and Technology of China; Peking University
Pseudocode | Yes | "Algorithm 1: Consistent In-Context Editing (ICE)" (a hedged code sketch of this objective appears after the table)
Open Source Code | No | The paper does not contain any explicit statement about providing source code or a link to a code repository.
Open Datasets | Yes | "We evaluate the performance of ICE with four datasets from KnowEdit [55], which are commonly used for knowledge insertion and modification. Detailed statistics on the selected datasets can be seen in Table 1. Additionally, we calculate covariance statistics for ROME and MEMIT using a sample of 100,000 entries from Wikitext (https://huggingface.co/datasets/Salesforce/wikitext) in fp32 format. Further implementation details can be seen in [28]." (a sketch of drawing this Wikitext sample appears after the table)
Dataset Splits | Yes | Table 1: Statistics on the evaluation datasets. WikiData_recent (Knowledge Insertion, Fact): 570 train / 1,266 test; ZsRE (Knowledge Modification, QA): 10,000 train / 1,230 test; WikiBio (Knowledge Modification, Hallucination): 592 train / 1,392 test; WikiData_counterfact (Knowledge Modification, Counterfact): 1,455 train / 885 test.
Hardware Specification | Yes | "All methods can be run on a single Nvidia A100 80GB GPU with 32GB memory and a 128-core AMD CPU."
Software Dependencies | No | The paper mentions using GPT-4 for context generation and models such as Llama2-7b-chat and GPT2-xl, but it does not specify versions for core software libraries, frameworks, or programming languages used for implementation.
Experiment Setup | Yes | "For FT-M, FT-L, and ICE, the optimization proceeds for a maximum of 25 steps with a learning rate of 7e-4 and 0 weight decay. For all results except the ablation study, we used λ = 1.0 for ICE without deliberate tuning." (an illustrative configuration using these values follows below)
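
The paper's Algorithm 1 is not reproduced in this report. Below is a minimal, PyTorch-style sketch of the kind of objective it names: a fine-tuning loss on the edit target plus a λ-weighted consistency term that pulls the model toward the distribution it induces on itself when the new knowledge is supplied as context. The backbone choice, tokenization details, and exact loss composition are assumptions, not the authors' released implementation.

    # Hedged sketch of an ICE-style update step, not the authors' released code.
    # Assumptions: the edit target is scored with cross-entropy, and a lambda-weighted
    # KL term pulls the no-context distribution toward the (detached) distribution the
    # model itself induces when the new fact is prepended as context.
    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2-xl"  # one of the backbones evaluated in the paper
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    opt = torch.optim.Adam(model.parameters(), lr=7e-4, weight_decay=0.0)

    def ice_step(context: str, query: str, target: str, lam: float = 1.0) -> float:
        """One optimization step; `context` carries the new knowledge in-context."""
        q_ids = tok(query, return_tensors="pt").input_ids
        t_ids = tok(target, return_tensors="pt").input_ids
        cq_ids = tok(context + query, return_tensors="pt").input_ids

        # (a) fine-tuning loss: make the model produce the target after the bare query
        full = torch.cat([q_ids, t_ids], dim=1)
        labels = full.clone()
        labels[:, : q_ids.shape[1]] = -100  # score only the target tokens
        ft_loss = model(full, labels=labels).loss

        # (b) consistency loss: match the self-induced, context-conditioned next-token
        # distribution at the query position (held fixed for this step via no_grad)
        with torch.no_grad():
            p_ctx = F.softmax(model(cq_ids).logits[:, -1, :], dim=-1)
        log_p = F.log_softmax(model(q_ids).logits[:, -1, :], dim=-1)
        kl = F.kl_div(log_p, p_ctx, reduction="batchmean")

        loss = ft_loss + lam * kl
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

The paper's procedure may differ in how the self-induced distribution is sampled and refreshed during optimization; the single-position KL above is only a simplification of that idea.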
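
The Wikitext sample used for the ROME and MEMIT covariance statistics can be drawn from the Hugging Face dataset linked in the Open Datasets row; a small sketch follows. The configuration name and the shuffling seed are assumptions, since the paper states only the sample size of 100,000 entries.

    # Hedged sketch: drawing a 100,000-entry Wikitext sample from the Hugging Face hub.
    # The config name and seed are assumptions; the paper only states the sample size.
    from datasets import load_dataset

    wikitext = load_dataset("Salesforce/wikitext", "wikitext-103-raw-v1", split="train")
    sample = wikitext.shuffle(seed=0).select(range(100_000))
    print(len(sample), sample[0]["text"][:80])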
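
As a usage illustration, the hyperparameters stated in the Experiment Setup row can drive the ice_step sketch above; the example edit strings and the early-stopping threshold are illustrative assumptions, not values from the paper.

    # Hedged sketch: the stated hyperparameters driving the ice_step() sketch above.
    # lr and weight_decay configure the Adam optimizer in that sketch; the example edit
    # and the convergence threshold below are illustrative assumptions.
    hparams = {"max_steps": 25, "lr": 7e-4, "weight_decay": 0.0, "lambda": 1.0}

    context = "New fact: The 2024 Summer Olympics were held in Paris. "
    query = "Where were the 2024 Summer Olympics held?"
    target = " Paris"

    for step in range(hparams["max_steps"]):
        loss = ice_step(context, query, target, lam=hparams["lambda"])
        if loss < 1e-2:  # assumed stopping criterion; the paper only caps steps at 25
            break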