CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models

Authors: Song Wang, Peng Wang, Tong Zhou, Yushun Dong, Zhen Tan, Jundong Li

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments demonstrate that the levels of bias vary across these dimensions, thereby providing guidance for the development of specific bias mitigation methods. Our code is provided at https://github.com/SongW-SW/CEB."
Researcher Affiliation | Academia | Song Wang (University of Virginia), Peng Wang (University of Virginia), Tong Zhou (University of Virginia), Yushun Dong (Florida State University), Zhen Tan (Arizona State University), Jundong Li (University of Virginia)
Pseudocode | No | The paper describes methods and processes but does not include any explicit pseudocode or algorithm blocks labeled as such.
Open Source Code | Yes | "Our code is provided at https://github.com/SongW-SW/CEB."
Open Datasets | Yes | "To address these limitations, we collect a variety of datasets designed for the bias evaluation of LLMs, and further propose CEB, a Compositional Evaluation Benchmark with 11,004 samples that cover different types of bias across different social groups and tasks." Cited datasets: BBQ (Parrish et al., 2022), Holistic Bias (Smith et al., 2022), Adult (Dua et al., 2017), Credit (Yeh & Lien, 2009), and Jigsaw (Cjadams et al., 2019).
Dataset Splits | Yes | "For each configuration (i.e., the combination of a bias type, a social group, and a task) in our CEB datasets, we evaluate 100 samples."
Hardware Specification | Yes | "We run all experiments on an A100 NVIDIA GPU with 80GB memory."
Software Dependencies | Yes | "For GPT-3.5, we use the checkpoint gpt-3.5-turbo-0613 and for GPT-4, we use the checkpoint gpt-4-turbo-2024-04-09."
Experiment Setup | Yes | "For all the models, we set the max token length of the generated output as 512. We set the temperature as 0 for all tasks except Continuation and Conversation, for which we set the temperature as 0.8."
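The decoding configuration in the Experiment Setup row can be sketched as a small helper. This is a hypothetical illustration (the function name `generation_config` is not from the paper's code): output length is capped at 512 tokens, and temperature is 0 for all tasks except the open-ended Continuation and Conversation tasks, which use 0.8.

```python
# Hypothetical sketch of the reported decoding setup, not the authors' code:
# max output length 512 tokens; temperature 0 everywhere except the
# Continuation and Conversation tasks, which use temperature 0.8.

def generation_config(task: str) -> dict:
    """Return the sampling parameters reported for a given CEB task."""
    open_ended = {"Continuation", "Conversation"}
    return {
        "max_tokens": 512,
        "temperature": 0.8 if task in open_ended else 0.0,
    }

print(generation_config("Classification"))  # deterministic decoding
print(generation_config("Continuation"))    # sampled decoding
```

These parameter dicts could then be passed to a chat-completion client when reproducing the runs with the checkpoints named above.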