FaAlGrad: Fairness through Alignment of Gradients across Different Subpopulations

Authors: Nikita Malik, Konda Reddy Mopuri

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments on multiple benchmark datasets demonstrate significant improvements in fairness metrics without having any exclusive regularizers for fairness."
Researcher Affiliation | Academia | Nikita Malik (EMAIL), Department of Information Technology, Manipal Institute of Technology, Manipal; Konda Reddy Mopuri (EMAIL), Department of Artificial Intelligence, Indian Institute of Technology Hyderabad.
Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block. The methodology is described in natural language and mathematical formulas.
Open Source Code | No | "Sample code for implementing the proposed framework is available at this link." The actual link is missing from the provided text.
Open Datasets | Yes | "To evaluate the effectiveness of our approach, we conducted extensive experiments on four well-known classification datasets: the COMPAS, Communities and Crime, and the Adult dataset. ... 1. COMPAS Dataset, ProPublica (2013): ... 2. Adult Dataset, Becker & Kohavi (1996): ... 3. Communities and Crime Dataset, Redmond (2011): ... 4. German Credit Dataset, Hofmann (1994):"
Dataset Splits | Yes | "The datasets were divided into training, validation and test sets in a 70%, 15%, 15% split respectively."
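The reported 70/15/15 split can be reproduced with two chained calls to scikit-learn's `train_test_split`: first hold out 30%, then split that remainder in half. A minimal sketch on placeholder data (the array shapes and random seeds are illustrative, not from the paper):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in for a tabular fairness dataset (sizes are illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 2, size=1000)

# Carve off 70% for training, then split the remaining 30% evenly
# into 15% validation and 15% test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=42)
```

With 1,000 rows this yields 700 training, 150 validation, and 150 test examples.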
Hardware Specification | Yes | "The experiments were conducted on a workstation with an Intel(R) Core(TM) i7-9750H CPU running at 2.60GHz, which features 6 physical cores and 12 logical threads. Additionally, the system was equipped with an NVIDIA GeForce GTX 1650 GPU and 32 GB of RAM."
Software Dependencies | Yes | "The software environment utilized Python version 3.11.1 for implementing and running the experiments. ... We use the fairlearn library Weerts et al. (2023) to compute the mentioned fairness metrics."
Experiment Setup | Yes | "A fixed learning rate of 0.001 is utilized, and the ReLU activation function is applied during training. ... For the COMPAS dataset, an MLP with one hidden layer, the ReLU activation function, and inner and outer learning rates of 0.01 were used. For the Adult dataset, a similar setup with two hidden layers was used, and the inner and outer loop learning rates were 0.001. For the Communities and Crime dataset, an MLP with one hidden layer, the tanh activation function, and inner and outer loop learning rates of 0.005 and 0.001 were used. For the German Credit Dataset, an MLP with one hidden layer, the ReLU activation function, and inner and outer learning rates of 0.005 and 0.1 were used."
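The COMPAS configuration above (one-hidden-layer MLP, ReLU, learning rate 0.001) can be sketched in PyTorch as follows. The input and hidden dimensions are illustrative, and the choice of Adam is an assumption, since the excerpt does not name the optimizer or layer sizes:

```python
import torch
import torch.nn as nn

# One-hidden-layer MLP with ReLU, as described for the COMPAS setup.
# Input size 10 and hidden size 64 are placeholders, not from the paper.
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

# The report states a fixed learning rate of 0.001; Adam is assumed here.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# One ordinary supervised step on a random mini-batch (placeholder data).
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))
loss = nn.CrossEntropyLoss()(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The separate inner and outer learning rates in the table suggest a meta-learning-style two-loop update over subpopulations, but the excerpt gives only the rates, so that loop structure is not reproduced here.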