Rethinking Fair Representation Learning for Performance-Sensitive Tasks

Authors: Charles Jones, Fabio De Sousa Ribeiro, Mélanie Roschewitz, Daniel Castro, Ben Glocker

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We run experiments across a range of medical modalities to examine the performance of fair representation learning under distribution shifts. Our results explain apparent contradictions in the existing literature and reveal how rarely considered causal and statistical aspects of the underlying data affect the validity of fair representation learning. We raise doubts about current evaluation practices and the applicability of fair representation learning methods in performance-sensitive settings. We argue that fine-grained analysis of dataset biases should play a key role in the field moving forward."
Researcher Affiliation | Collaboration | Charles Jones (1), Fabio De Sousa Ribeiro (1), Mélanie Roschewitz (1), Daniel C. Castro (2) & Ben Glocker (1); (1) Department of Computing, Imperial College London, UK; (2) Microsoft Research Health Futures, Cambridge, UK
Pseudocode | No | The paper describes methods and processes but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about the release of source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | "We adapt the experimental setup from Jones et al. (2023), consisting of five datasets across the modalities of chest X-ray (CheXpert, MIMIC; Irvin et al., 2019; Johnson et al., 2019), dermatoscopy (HAM10000, Fitzpatrick17k; Tschandl et al., 2018; Groh et al., 2021; Groh et al., 2022), and fundus imaging (PAPILA; Kovalyk et al., 2022)."
Dataset Splits | No | "The training and testing datasets are generated by randomly splitting the unbiased variant of each dataset." This statement confirms that splitting occurred, but it does not provide the split percentages, sample counts, or methodology (e.g., the random seed) needed for exact reproduction.
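The reproducibility gap noted here is exactly the kind that a seeded split function closes. A minimal sketch of what a fully reproducible random split would look like; the 80/20 ratio, the seed value, and the function name are illustrative assumptions, not details reported in the paper:

```python
import random


def split_indices(n_samples: int, test_fraction: float = 0.2, seed: int = 42):
    """Reproducibly split dataset indices into train/test sets.

    The ratio and seed are hypothetical placeholders; the paper does
    not report the values used for its unbiased-dataset splits.
    """
    indices = list(range(n_samples))
    # Fixed seed => identical shuffle (and hence identical split) every run.
    random.Random(seed).shuffle(indices)
    n_test = int(n_samples * test_fraction)
    return indices[n_test:], indices[:n_test]


train_idx, test_idx = split_indices(1000)
```

Reporting just these two numbers (fraction and seed) alongside the dataset version would make the split exactly reconstructable.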
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | Yes | The paper pins torchvision==0.18.1 (referencing its adjust_sharpness implementation).
Experiment Setup | Yes | Table 1: Hyperparameters used across all runs in Section 5.
    Architecture: ResNet18 (He et al., 2016)
    Optimiser: AdamW (Loshchilov & Hutter, 2018) {lr: 1e-4, β1: 0.9, β2: 0.999}
    Adversarial coefficients: {Marginal FRL: 1.0, Conditional FRL: 0.05}
    LR schedule: Constant
    Max epochs: 50
    Early stopping: {Monitor: worst-group AUC, Patience: 5 epochs}
    Augmentation: RandomResizedCrop, RandomRotation(15°)
    Batch size: 256 (32 for PAPILA)
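The values in Table 1 translate directly into a single configuration mapping, which is one way to see why this row scores "Yes" for reproducibility. A sketch of such a mapping; the dictionary key names are our own choice, and only the values are taken from the table:

```python
# Hyperparameters transcribed from Table 1 of the paper.
# Key names are illustrative; values come from the reported table.
CONFIG = {
    "architecture": "ResNet18",
    "optimiser": {"name": "AdamW", "lr": 1e-4, "betas": (0.9, 0.999)},
    "adversarial_coefficients": {"marginal_frl": 1.0, "conditional_frl": 0.05},
    "lr_schedule": "constant",
    "max_epochs": 50,
    "early_stopping": {"monitor": "worst_group_auc", "patience_epochs": 5},
    "augmentation": ["RandomResizedCrop", "RandomRotation(15deg)"],
    "batch_size": {"default": 256, "PAPILA": 32},
}
```

A released config file in this form, together with the pinned torchvision version, would cover most of what the other table rows flag as missing.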