Optimal Fair Learning Robust to Adversarial Distribution Shift
Authors: Sushant Agarwal, Amit Deshpande, Rajmohan Rajaraman, Ravi Sundaram
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We demonstrate in Claim 1 (Section 3.1) that the deterministic Fair Bayes Optimal Classifier (BOC) is not robust to adversarial noise, corroborating Konstantinov & Lampert (2022). Our main results prove the robustness of randomized Fair BOCs: Theorems 1 (Section 3.2), 2, and 3 (Section 4) show that the accuracy of the randomized Fair BOC is robust to malicious noise across three popular fairness notions, namely Demographic Parity, Equal Opportunity, and Predictive Equality (formalized in the sketch following this table). |
| Researcher Affiliation | Collaboration | ¹Northeastern University, ²Microsoft Research. Correspondence to: Sushant Agarwal <EMAIL>. |
| Pseudocode | No | The paper describes methods and characterizations in prose and uses mathematical notation, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements about releasing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper is theoretical and focuses on mathematical characterizations of fair learning. It does not conduct empirical studies with specific datasets and therefore does not provide access information for any open datasets used in its methodology. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments on datasets that would require specific training/test/validation splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the hardware used to run experiments. |
| Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers for experimental reproducibility. |
| Experiment Setup | No | The paper is theoretical, presenting proofs and claims rather than empirical experiments, and thus does not describe any experimental setup details such as hyperparameters or training configurations. |
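
For reference, the three fairness notions named in the Research Type row have standard formal definitions in the fair classification literature; a sketch of those standard definitions (the specific constraint formulations used in the paper may differ in presentation):

```latex
% Standard definitions for a binary predictor \hat{Y}, true label Y,
% and protected attribute A \in \{0, 1\}.
% Demographic Parity: equal positive prediction rates across groups.
\Pr[\hat{Y}=1 \mid A=0] = \Pr[\hat{Y}=1 \mid A=1]
% Equal Opportunity: equal true positive rates across groups.
\Pr[\hat{Y}=1 \mid Y=1, A=0] = \Pr[\hat{Y}=1 \mid Y=1, A=1]
% Predictive Equality: equal false positive rates across groups.
\Pr[\hat{Y}=1 \mid Y=0, A=0] = \Pr[\hat{Y}=1 \mid Y=0, A=1]
```

The deterministic-versus-randomized distinction at the heart of the paper's claims can also be illustrated in code. The sketch below assumes the group-wise thresholding form that fair Bayes-optimal classifiers commonly take in this literature: predict 1 when the regression function η(x) = P(Y=1 | X=x) exceeds a group-dependent threshold, and randomize exactly at the threshold. This is a minimal illustrative sketch under that assumption; the function name, parameters, and threshold values are hypothetical and are not taken from the paper.

```python
import numpy as np

def randomized_group_threshold(eta, group, thresholds, tie_probs, rng=None):
    """Hypothetical sketch of a randomized group-wise thresholding rule.

    eta        -- estimated scores eta(x) = P(Y=1 | X=x), one per example
    group      -- protected-attribute value per example (keys of the dicts)
    thresholds -- mapping group -> threshold t_a
    tie_probs  -- mapping group -> probability of predicting 1 when
                  eta(x) == t_a; setting every tie probability to 0 or 1
                  recovers a deterministic classifier
    """
    rng = rng or np.random.default_rng()
    yhat = np.zeros(len(eta), dtype=int)
    for i, (e, a) in enumerate(zip(eta, group)):
        if e > thresholds[a]:
            yhat[i] = 1
        elif e == thresholds[a]:
            # Randomization at the boundary: a biased coin flip per example.
            yhat[i] = int(rng.random() < tie_probs[a])
    return yhat

# Illustrative usage with made-up scores, groups, and thresholds.
eta = np.array([0.9, 0.5, 0.3, 0.5])
group = np.array([0, 0, 1, 1])
yhat = randomized_group_threshold(
    eta, group, thresholds={0: 0.5, 1: 0.5}, tie_probs={0: 0.25, 1: 0.75}
)
```

The randomization at the threshold is the feature the paper's results turn on: per Claim 1, the deterministic rule (tie probabilities fixed at 0 or 1) is not robust to adversarial noise, while Theorems 1 through 3 establish accuracy robustness for the randomized rule under each of the three fairness notions above.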