Towards Fairness with Limited Demographics via Disentangled Learning

Authors: Zichong Wang, Anqi Wu, Nuno Moniz, Shu Hu, Bart Knijnenburg, Xingquan Zhu, Wenbin Zhang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on three benchmark datasets highlight the effectiveness of the proposed method, surpassing state-of-the-art with significant gains in fairness while maintaining comparable utility." "We conduct extensive experiments on three real-world benchmark datasets. The results demonstrate that our proposed method outperforms existing baselines across multiple fairness metrics while achieving comparable prediction performance in downstream tasks."
Researcher Affiliation | Academia | 1 Florida International University, FL, USA; 2 University of Notre Dame, IN, USA; 3 Purdue University, IN, USA; 4 Clemson University, SC, USA; 5 Florida Atlantic University, FL, USA
Pseudocode | No | The paper describes the methodology using mathematical equations and textual explanations, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | "We evaluate the effectiveness of our proposed FDVAE framework on three widely used datasets in the fairness domain: Adult [Ding et al., 2021], COMPAS [Larson et al., 2016], and CelebA [Liu et al., 2015]."
Dataset Splits | Yes | "For all datasets, we randomly split the data into 50% training data, 20% validation data, and 30% test data."
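The 50/20/30 random split quoted above can be sketched as follows. The paper does not describe its splitting code, so the helper name, the fixed seed, and the use of NumPy's permutation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def split_indices(n, train=0.5, val=0.2, test=0.3, seed=0):
    """Randomly partition n sample indices into train/val/test index arrays.

    The 50/20/30 ratio follows the paper; everything else (seed,
    permutation-based shuffling) is an assumption for illustration.
    """
    assert abs(train + val + test - 1.0) < 1e-9  # ratios must sum to 1
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)                    # shuffled index order
    n_train = int(n * train)
    n_val = int(n * val)
    return (perm[:n_train],                      # 50% training
            perm[n_train:n_train + n_val],       # 20% validation
            perm[n_train + n_val:])              # 30% test

tr, va, te = split_indices(1000)
```

For 1000 samples this yields 500/200/300 disjoint index sets covering the full dataset; applying the same helper per dataset reproduces the described split protocol, up to the unknown random seed.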
Hardware Specification | No | The paper does not provide specific hardware details, such as GPU/CPU models, processors, or memory used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as the programming languages, libraries, or frameworks used for implementation.
Experiment Setup | No | The paper mentions hyperparameters λ and γ and analyzes their sensitivity, but it does not report the specific values of training hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) or other detailed training configurations used for the main experimental results.