FDGen: A Fairness-Aware Graph Generation Model
Authors: Zichong Wang, Wenbin Zhang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on four real-world datasets demonstrate that FDGen outperforms state-of-the-art methods, achieving notable improvements in fairness while maintaining competitive generation performance. |
| Researcher Affiliation | Academia | 1Knight Foundation School of Computing and Information Sciences, Florida International University, Miami, USA. |
| Pseudocode | No | The paper describes the proposed framework and processes using mathematical equations and descriptive text, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide any links to a code repository. |
| Open Datasets | Yes | Four real-world fairness datasets, namely Cora, Citeseer, Photo, and Computer, are used in our experiments. ... In the Cora and Citeseer datasets (Sen et al., 2008) ... The Photo and Computer datasets (Shchur et al., 2018) are segments of the Amazon co-purchase graph... |
| Dataset Splits | No | The paper mentions using Cora, Citeseer, Photo, and Computer datasets for experiments but does not provide specific details on how these datasets were split into training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to conduct the experiments. |
| Software Dependencies | No | The paper mentions using 'GCN as our base model' for node classification, but it does not specify software versions for GCN or for any other libraries or frameworks. |
| Experiment Setup | No | The paper refers to 'a and b as hyperparameters that balance their contributions' and to the training objective 'λ1 Lf + λ2 Lg + λ3 Ld + λ4 LD', but it does not specify concrete values for these hyperparameters, nor learning rates, batch sizes, number of epochs, or other detailed training configurations. |