Causal Logistic Bandits with Counterfactual Fairness Constraints
Authors: Jiajun Chen, Jin Tian, Christopher John Quinn
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We next evaluate the empirical performance of our proposed methods on a synthetic data set. See Appendix H for additional experiments for different values of the constraint threshold τ and tightness parameter ϵ. We evaluated the algorithms using cumulative regret (6), cumulative constraint violations (7), and a penalized form of cumulative regret for different horizons. |
| Researcher Affiliation | Academia | 1Department of Computer Science, Iowa State University, Ames, IA, USA 2Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE. Correspondence to: Christopher John Quinn <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 CCLB Algorithm |
| Open Source Code | Yes | The source code is available at https://github.com/jchen-research/CCLB. |
| Open Datasets | No | We generated the synthetic dataset from a structural causal model (modifying an example from Plecko & Bareinboim, 2024). |
| Dataset Splits | No | At every round, we generate a set of 20 feature vectors {[A, W, M, D_i]}_{i=1}^{20} along with their corresponding counterfactual feature vectors. We use rejection sampling over the sets to make sure that at least twelve of the feature vectors are feasible. |
| Hardware Specification | No | The paper does not provide specific hardware details for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper mentions parameters like 'truncated parameter ρ, step size η = T/ρ, and the initial dual value ϕ1 = 0' for the proposed algorithms and discusses varying 'constraint threshold τ and tightness parameter ϵ' in numerical experiments. However, it does not provide specific numerical values for ρ, η, or other common hyperparameters such as learning rate, batch size, or optimizer settings for the models evaluated. |
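The Dataset Splits row describes per-round data generation via rejection sampling: draw a set of 20 feature vectors and redraw the whole set until at least twelve pass a feasibility check. A minimal sketch of that loop is below; the feature distribution and the `is_feasible` predicate are placeholders, since the paper's actual structural causal model and fairness constraint are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)


def is_feasible(x):
    # Hypothetical stand-in for the paper's feasibility check
    # (the counterfactual fairness constraint is not specified here).
    return x.sum() > 0.0


def sample_round(n_vectors=20, min_feasible=12, max_tries=10_000):
    """Rejection sampling: redraw the full set of feature vectors
    until at least `min_feasible` of them are feasible."""
    for _ in range(max_tries):
        # Each row stands in for a feature vector [A, W, M, D_i];
        # a 4-dimensional Gaussian is a placeholder distribution.
        vectors = rng.normal(size=(n_vectors, 4))
        if sum(is_feasible(v) for v in vectors) >= min_feasible:
            return vectors
    raise RuntimeError("rejection sampling did not find a feasible set")


batch = sample_round()
```

In the paper's setting, counterfactual feature vectors would be generated alongside each set from the same structural causal model; this sketch only illustrates the accept/reject structure of the sampler.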