fairGNN-WOD: Fair Graph Learning Without Complete Demographics

Authors: Zichong Wang, Fang Liu, Shimei Pan, Jun Liu, Fahad Saeed, Meikang Qiu, Wenbin Zhang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on three real-world graph datasets illustrate that FairGNN-WOD not only outperforms state-of-the-art baselines in achieving fairness but also maintains comparable prediction performance.
Researcher Affiliation | Academia | Florida International University, FL, USA; University of Notre Dame, IN, USA; University of Maryland Baltimore County, MD, USA; Northeastern University, MA, USA; Augusta University, GA, USA
Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations, but it does not include a clearly labeled pseudocode block or algorithm box.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or provide a link to a code repository for the described methodology.
Open Datasets | Yes | Our experiments are conducted on three widely used datasets: the Credit dataset [Yeh and Lien, 2009] and the Pokec-z and Pokec-n datasets [Takac and Zabovsky, 2012].
Dataset Splits | No | To simulate cases of missing demographics, we mask all demographic information in the training and validation sets.
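The quoted masking protocol (hide all demographic labels on training and validation nodes, keep them only on test nodes) can be sketched as follows. This is a minimal illustration, not the paper's code; `mask_demographics` and the sentinel value `-1` are assumptions.

```python
import numpy as np

def mask_demographics(sens, train_idx, val_idx, mask_value=-1):
    """Replace sensitive-attribute values of training and validation
    nodes with a sentinel, leaving test-node values intact.

    sens: 1-D array of demographic labels for all nodes.
    """
    masked = sens.copy()  # do not mutate the original labels
    masked[train_idx] = mask_value
    masked[val_idx] = mask_value
    return masked

# toy example: 6 nodes with a binary sensitive attribute
sens = np.array([0, 1, 1, 0, 1, 0])
train_idx = np.array([0, 1])
val_idx = np.array([2, 3])
masked = mask_demographics(sens, train_idx, val_idx)
# train/val entries become -1; test nodes (4, 5) keep their labels
```

Keeping the sentinel (rather than deleting the column) preserves array shapes, so the same graph tensors can be fed to the model with and without demographics.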
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | where λ is the hyperparameter that balances the maximization of the ELBO and the minimization of the penalty term. ... where α and β are tunable hyperparameters controlling the weights of the various elements ... We examine the sensitivity of FairGNN-WOD by adjusting the parameters α and β across the values {1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3}.
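The sensitivity analysis quoted above sweeps α and β over seven log-spaced values each, i.e. 49 configurations. A minimal sketch of such a grid sweep, where `run_trial` is a hypothetical stand-in for training and evaluating the model:

```python
import itertools

# log-spaced grid from the paper's sensitivity analysis
grid = [1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3]

def run_trial(alpha, beta):
    # placeholder: a real run would train the model with these
    # hyperparameters and return (accuracy, fairness_gap)
    return 0.0, 0.0

results = {}
for alpha, beta in itertools.product(grid, grid):
    results[(alpha, beta)] = run_trial(alpha, beta)

# 7 x 7 = 49 configurations in total
```

Results keyed by the (α, β) pair make it straightforward to plot accuracy and fairness surfaces over the grid afterwards.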