Bias Mitigation Methods: Applicability, Legality, and Recommendations for Development
Authors: Madeleine Waller, Odinaldo Rodrigues, Michelle Seng Ah Lee, Oana Cocarascu
JAIR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper we highlight a number of significant practical limitations and regulatory compliance issues associated with the application of existing bias mitigation methods to ADMS. We present an example of an algorithmic system used in recruitment to illustrate these limitations. Our analysis of existing methods indicates a pressing need for a change in the approach to the development of new methods. In order to address the limitations, we provide recommendations for key factors to consider in the development of new bias mitigation methods that aim to be effective in real-world scenarios and comply with legal requirements in the European Union, United Kingdom and United States, such as non-discrimination, data protection and sector-specific regulations. Further, we suggest a checklist relating to these recommendations that should be included with the development of new bias mitigation methods. |
| Researcher Affiliation | Academia | Madeleine Waller EMAIL Department of Informatics King's College London, UK Odinaldo Rodrigues EMAIL Department of Informatics King's College London, UK Michelle Seng Ah Lee EMAIL Department of Computer Science and Technology University of Cambridge, UK Oana Cocarascu EMAIL Department of Informatics King's College London, UK |
| Pseudocode | No | The paper provides an analysis of existing bias mitigation methods, covering their applicability and legality, along with recommendations and a checklist for developing new methods. It does not present a new algorithm or method in structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper focuses on analyzing existing bias mitigation methods and providing recommendations, rather than presenting a new methodology with associated code. There is no mention of releasing source code, providing a repository link, or including code in supplementary materials for the work described. |
| Open Datasets | No | The paper discusses various datasets that are used in the broader algorithmic fairness literature and for evaluating existing bias mitigation methods (e.g., in Section 3.1.4: "Each method includes experiments using a chosen model trained on publicly available datasets"). However, this paper itself does not conduct experiments using a specific dataset for which it provides concrete access information (link, DOI, repository, or formal citation). |
| Dataset Splits | No | The paper does not present its own experimental results that would require dataset splits. It mentions discussions of "different proportions of training/testing data" in Section 3.1.4, but this refers to evaluations conducted in other works, not to experiments within this paper's methodology. |
| Hardware Specification | No | The paper is a conceptual and review-based work that analyzes existing bias mitigation methods and provides recommendations. It does not involve running original experiments, and therefore no hardware specifications are mentioned. |
| Software Dependencies | No | The paper does not describe any specific software or libraries with version numbers, as it is a conceptual and review-based work that does not involve implementing or running computational experiments. |
| Experiment Setup | No | The paper is a conceptual and review-based work, providing analysis and recommendations for bias mitigation methods. It does not present any original experimental setup, hyperparameters, or training configurations. |