Editable Concept Bottleneck Models

Authors: Lijie Hu, Chenyang Ren, Zhengyu Hu, Hongbin Lin, Cheng-Long Wang, Zhen Tan, Weimin Lyu, Jingfeng Zhang, Hui Xiong, Di Wang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate the efficiency and adaptability of our ECBMs, affirming their practical value in CBMs." … "Comprehensive experiments on benchmark datasets show that our ECBMs are efficient and effective." … "To showcase the effectiveness and efficiency of our ECBMs, we conduct comprehensive experiments across various benchmark datasets to demonstrate our superior performance."
Researcher Affiliation | Academia | 1 Provable Responsible AI and Data Analytics (PRADA) Lab; 2 King Abdullah University of Science and Technology; 3 Shanghai Jiao Tong University; 4 Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology (Guangzhou), China, and Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR, China; 5 Arizona State University; 6 Stony Brook University; 7 The University of Auckland. Correspondence to: Di Wang <EMAIL>.
Pseudocode | Yes | Algorithm 1: Concept-label-level ECBM … Algorithm 8: EK-FAC Data-level ECBM
Open Source Code | Yes | Code is available at https://github.com/kaustpradalab/ECBM
Open Datasets | Yes | We utilize three datasets: X-ray Grading (OAI) (Nevitt et al., 2006), Bird Identification (CUB) (Wah et al., 2011), and the Large-scale CelebFaces Attributes Dataset (CelebA) (Liu et al., 2015).
Dataset Splits | Yes | At the concept level, one concept was randomly removed for the OAI dataset and ten concepts were randomly removed for the CUB dataset, each repeated with five different seeds. At the data level, 3% of the data points were randomly deleted, repeated 10 times with different seeds. At the concept-label level, 3% of the data points were randomly selected and one concept of each was modified at random, repeated 10 times for consistency across iterations.
Hardware Specification | Yes | Our experiments utilized an Intel Xeon CPU and an RTX 3090 GPU.
Software Dependencies | No | The paper discusses various deep learning models and architectures (e.g., ResNet-18) but does not provide specific version numbers for the software libraries, frameworks, or operating systems used in the experiments.
Experiment Setup | No | For all the above datasets, we follow the same network architecture and settings outlined in (Koh et al., 2020).
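The three editing protocols described in the Dataset Splits row (concept-level removal, data-level deletion, and concept-label-level modification) can be sketched on toy data as follows. This is a minimal illustration of the perturbation setup only; all sizes, seeds, and function names are assumptions, not taken from the paper's released code.

```python
import random

random.seed(0)

# Toy stand-in for a concept-annotated dataset: every data point
# carries n_concepts binary concept labels. Sizes are illustrative.
n_points, n_concepts = 100, 12
dataset = [[random.randint(0, 1) for _ in range(n_concepts)]
           for _ in range(n_points)]

def concept_level_edit(data, k):
    """Concept-level request: drop k randomly chosen concept columns."""
    removed = set(random.sample(range(n_concepts), k))
    return [[c for j, c in enumerate(point) if j not in removed]
            for point in data]

def data_level_edit(data, frac):
    """Data-level request: delete a random fraction of the data points."""
    removed = set(random.sample(range(len(data)), int(len(data) * frac)))
    return [p for i, p in enumerate(data) if i not in removed]

def concept_label_level_edit(data, frac):
    """Concept-label-level request: flip one randomly chosen concept
    label in a random fraction of the data points."""
    edited = set(random.sample(range(len(data)), int(len(data) * frac)))
    out = [list(p) for p in data]
    for i in edited:
        j = random.randrange(n_concepts)
        out[i][j] = 1 - out[i][j]  # binary flip always changes the label
    return out

# One round of each protocol mirroring the reported setup:
after_concept = concept_level_edit(dataset, k=1)          # OAI: remove 1 concept
after_data = data_level_edit(dataset, frac=0.03)          # delete 3% of points
after_label = concept_label_level_edit(dataset, frac=0.03)  # edit 3% of points
```

In the paper's setup each such perturbation triggers an editing request to the trained CBM, and the ECBM update is compared against retraining from scratch; the sketch only generates the perturbed datasets, not the model update.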