Causality-Inspired Disentanglement for Fair Graph Neural Networks

Authors: Guixian Zhang, Debo Cheng, Guan Yuan, Shang Liu, Yanmei Zhang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on three widely used datasets demonstrate that CDFG consistently outperforms existing methods, achieving competitive utility and significantly improved fairness." (Section 5: Experiments)
Researcher Affiliation | Academia | 1) School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, 221116, China; 2) Mine Digitization Engineering Research Center of the Ministry of Education, China University of Mining and Technology, Xuzhou, Jiangsu, 221116, China; 3) School of Computer Science and Technology, Hainan University, Haikou, Hainan, 570228, China
Pseudocode | No | The paper describes the proposed method in Sections 4.1, 4.2, and 4.3 using descriptive text and mathematical equations, but it does not include an explicitly labeled "Pseudocode" or "Algorithm" block with structured steps.
Open Source Code | Yes | "Due to space constraints, we have included a detailed description of GNN in Appendix A." https://github.com/shawn-dm/CDFG/blob/main/Appendix.pdf
Open Datasets | Yes | "We conducted experiments on three widely used real-world datasets, namely German [Dua and Graff, 2017], Bail [Jordan and Freiburger, 2015], and Credit [Yeh and Lien, 2009]."
Dataset Splits | Yes | "Consistent with prior studies [Agarwal et al., 2021; Wang et al., 2022], the datasets are partitioned into three distinct phases: training, validation, and testing."
Hardware Specification | No | The paper mentions training GNNs with the Adam optimizer but does not specify the hardware used for the experiments (e.g., GPU or CPU models).
Software Dependencies | No | The paper states that "The Adam optimization algorithm is applied uniformly across all models," but it does not name any software with version numbers, such as a specific deep learning framework (e.g., PyTorch, TensorFlow).
Experiment Setup | Yes | "We set the hidden layer size uniformly to 16 and κ to 0.5. ... For all three datasets, the model achieves the best fairness metrics when β = 0.2. ... We set γ to {0.001-0.005} and δ to {0.0001-0.0005}. ... For datasets with a higher average node degree (Credit and German), a smaller K-value (α = 3) is effective. For datasets with a lower average node degree (Bail), a larger K-value (α = 6) is necessary."
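The hyperparameters quoted in the Experiment Setup row can be consolidated into a per-dataset configuration. The sketch below is illustrative only: the key names, the `config_for` helper, and the midpoint choices for the γ and δ ranges are assumptions, not part of the paper; only the numeric values themselves come from the quoted setup.

```python
# Hypothetical per-dataset hyperparameter config assembled from the quoted
# experiment setup; structure and names are illustrative, not the authors'.
SHARED = {
    "hidden_size": 16,  # hidden layer size, uniform across all models
    "kappa": 0.5,       # κ
    "beta": 0.2,        # β giving the best fairness on all three datasets
}

# γ and δ are reported only as ranges ({0.001-0.005} and {0.0001-0.0005});
# the midpoints used here are an arbitrary illustrative choice.
# α (the K-value) follows the paper's rule of thumb: 3 for the denser
# Credit/German graphs, 6 for the sparser Bail graph.
DATASETS = {
    "german": {**SHARED, "gamma": 0.003, "delta": 0.0003, "alpha": 3},
    "credit": {**SHARED, "gamma": 0.003, "delta": 0.0003, "alpha": 3},
    "bail":   {**SHARED, "gamma": 0.003, "delta": 0.0003, "alpha": 6},
}

def config_for(name: str) -> dict:
    """Return the sketched hyperparameter set for a dataset."""
    return DATASETS[name.lower()]
```

Keeping the degree-dependent α in the table (rather than computing it from the graph) mirrors how the paper reports it: as a per-dataset choice tied to average node degree.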