DF-MIA: A Distribution-Free Membership Inference Attack on Fine-Tuned Large Language Models
Authors: Zhiheng Huang, Yannan Liu, Daojing He, Yu Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on three representative LLM models ranging from 1B to 8B on three datasets. The results demonstrate that the DF-MIA significantly enhances the performance of MIA. |
| Researcher Affiliation | Collaboration | ¹Harbin Institute of Technology, Shenzhen, China; ²ByteDance, China; ³Zhejiang University, China |
| Pseudocode | Yes | Algorithm 1: DF-MIA |
| Open Source Code | Yes | Our code is available at https://github.com/HZHKevin/DF-MIA. |
| Open Datasets | Yes | We evaluate our framework on three datasets from various domains: Wikitext-103, AGNews, and XSum. To be specific, the Wikitext-103 (Merity et al. 2017) contains academic writing summaries, the AGNews (Zhang, Zhao, and LeCun 2015) involves summaries of news topics, and the XSum (Narayan, Cohen, and Lapata 2018) contains document summaries. |
| Dataset Splits | No | To obtain the target models, we follow the method in (Mattern et al. 2023) and fine-tune the base models on each dataset described above. The detailed settings are described in the supplementary material. |
| Hardware Specification | Yes | Our experiments are conducted using 4 NVIDIA A800 GPUs. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, or other libraries used for implementation). |
| Experiment Setup | No | To obtain the target models, we follow the method in (Mattern et al. 2023) and fine-tune the base models on each dataset described above. The detailed settings are described in the supplementary material. |
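The full DF-MIA implementation lives in the linked repository. As background for readers assessing reproducibility, the classical loss-threshold membership inference baseline that reference-free attacks such as DF-MIA aim to improve can be sketched in a few lines. This is a generic illustration, not the authors' method; the function names and toy log-probabilities below are invented for the example.

```python
def nll_score(token_logprobs):
    """Average negative log-likelihood of a sample under the target model.
    Fine-tuned models tend to assign lower loss to training members."""
    return -sum(token_logprobs) / len(token_logprobs)

def loss_threshold_attack(token_logprobs, threshold):
    """Predict 'member' when the sample's average loss falls below the threshold."""
    return nll_score(token_logprobs) < threshold

# Toy per-token log-probabilities: a low-loss (likely memorized) sample
# versus a high-loss (likely unseen) sample.
member_lps = [-0.1, -0.2, -0.15]
nonmember_lps = [-2.0, -1.5, -2.5]
print(loss_threshold_attack(member_lps, threshold=1.0))     # True
print(loss_threshold_attack(nonmember_lps, threshold=1.0))  # False
```

In practice the per-token log-probabilities would come from scoring the candidate text with the fine-tuned target model, and the threshold would be calibrated on held-out data; DF-MIA's contribution is doing this without assuming access to the target's training distribution.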