FlipAttack: Jailbreak LLMs via Flipping
Authors: Yue Liu, Xiaoxin He, Miao Xiong, Jinlan Fu, Shumin Deng, Yingwei Ma, Jiaheng Zhang, Bryan Hooi
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on 8 LLMs demonstrate the superiority of FlipAttack. Remarkably, it achieves a 78.97% attack success rate across 8 LLMs on average and a 98% bypass rate against 5 guard models on average. |
| Researcher Affiliation | Collaboration | 1 Integrative Sciences and Engineering Programme, NUS Graduate School, National University of Singapore; 2 Institute of Data Science (IDS), National University of Singapore; 3 Department of Computer Science, School of Computing, National University of Singapore; 4 Moonshot. Correspondence to: Yue Liu <EMAIL>. |
| Pseudocode | No | The paper describes the methodology, including the attack disguise module and the flipping guidance module, in prose and with a diagram in Figure 2, but does not present explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/yueliu1999/FlipAttack. |
| Open Datasets | Yes | We adopt Harmful Behaviors in the AdvBench dataset, which is proposed by (Zou et al., 2023). It contains 520 prompts for harmful behaviors. Besides, we also have additional experiments on StrongREJECT (Souly et al., 2024). We tested 8 LLMs on the top 200 benign prompts from the Alpaca safe dataset (He et al., 2024). |
| Dataset Splits | No | We adopt Harmful Behaviors in the AdvBench dataset, which is proposed by (Zou et al., 2023). It contains 520 prompts for harmful behaviors. To facilitate the quick comparison of future work with FlipAttack, we also report the performance on a subset of AdvBench containing 50 samples. For the data sampling, we follow the same setting of (Mehrotra et al., 2023). This describes the size of the dataset and subsets, but not how it was explicitly split into training/test/validation for the experiments. |
| Hardware Specification | Yes | We conduct all API-based experiments on a laptop with one 8-core AMD Ryzen 7 4800H with Radeon Graphics CPU and 16GB RAM. Besides, all GPU-based experiments are implemented on a server with two 56-core Intel(R) Xeon(R) Platinum 8480CL CPUs, 1024GB RAM, and 8 NVIDIA H100 GPUs. |
| Software Dependencies | No | For closed-source LLMs, we adopt their original APIs to get the responses. For open-source LLMs, we use DeepInfra APIs. For the closed-source guard model, we use OpenAI's API. Besides, our proposed method has been added to Microsoft Azure's PyRIT package. These are API/package names but lack specific version numbers required for a reproducible software environment. |
| Experiment Setup | Yes | Thus, we develop four variants to help LLMs understand and execute harmful intents based on chain-of-thought reasoning, role-playing prompting, and few-shot in-context learning. (A) Vanilla: it simply asks LLMs first to read the stealthy prompt and then recover it based on the rules of different modes. (B) Vanilla+CoT: it is based on Vanilla and further asks LLMs to finish the information recovery task by providing solutions step by step in detail... (C) Vanilla+CoT+LangGPT: this variant is based on Vanilla+CoT and adopts a role-playing structure... (D) Vanilla+CoT+LangGPT+Fewshot: this variant is based on Vanilla+CoT+LangGPT and provides some few-shot demonstrations... Their definitions and prompts are in Sections 3.1.2 and A.8. |
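The attack disguise module described above rests on simple string flipping. As a rough illustration only (the function names and the exact set of flipping modes here are our assumptions, not taken verbatim from the paper), the transformations can be sketched in a few lines of Python:

```python
def flip_chars_in_sentence(prompt: str) -> str:
    # Hypothetical sketch: reverse the entire character sequence of the prompt.
    return prompt[::-1]

def flip_chars_in_word(prompt: str) -> str:
    # Hypothetical sketch: reverse characters within each word, keeping word order.
    return " ".join(word[::-1] for word in prompt.split())

def flip_word_order(prompt: str) -> str:
    # Hypothetical sketch: reverse the order of words, keeping each word intact.
    return " ".join(reversed(prompt.split()))

# Example on a benign prompt:
print(flip_chars_in_sentence("how to bake bread"))  # → "daerb ekab ot woh"
print(flip_chars_in_word("how to bake bread"))      # → "woh ot ekab daerb"
print(flip_word_order("how to bake bread"))         # → "bread bake to how"
```

The flipping guidance module would then pair such a disguised prompt with instructions (Vanilla, +CoT, +LangGPT, +Fewshot, as described in the row above) asking the LLM to recover and execute it; that part is prompt engineering rather than code.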