Federated Causally Invariant Feature Learning
Authors: Xianjie Guo, Kui Yu, Lizhen Cui, Han Yu, Xiaoxiao Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on synthetic and real-world datasets demonstrate the superiority of FedCIFL against eight state-of-the-art baselines, beating the best-performing approach by 3.19%, 9.07% and 2.65% in terms of average test Accuracy, RMSE and F1 score, respectively. ... 4 Experimental Evaluation |
| Researcher Affiliation | Academia | (1) School of Computer Science and Information Engineering, Hefei University of Technology, China; (2) Key Laboratory of Knowledge Engineering with Big Data of Ministry of Education, China; (3) School of Software, Shandong University, China; (4) College of Computing and Data Science, Nanyang Technological University, Singapore; (5) Department of Electrical and Computer Engineering, The University of British Columbia, Canada; (6) Vector Institute, Canada. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | The detailed pseudocode of FedCIFL is provided in Appendix C. |
| Open Source Code | Yes | The source code is available at https://github.com/Xianjie-Guo/FedCIFL. |
| Open Datasets | Yes | Real-world data. We also compare FedCIFL with the baselines on the Amazon Review dataset. ... In our experiments, we use the preprocessed version of the Amazon Review dataset reported in (Wang et al. 2018) |
| Dataset Splits | Yes | To emulate Scenario 3 (i.e., P_{c_{k1}} = P_{c_{k2}} and P_{c_{k1}} ≠ P_{test} for k1 ≠ k2, k1, k2 ∈ {1, 2, ..., m}), we set r_{c_k} = 0.4 and r_{test} = 0.9. To emulate Scenario 4 (i.e., P_{c_{k1}} ≠ P_{c_{k2}}, P_{c_{k1}} ≠ P_{test} and P_{c_{k2}} ≠ P_{test} for k1 ≠ k2, k1, k2 ∈ {1, 2, ..., m}), we set r_{test} = 0.9 and then uniformly assign different bias rates to each client within the interval [0.1, 0.7] using the following equation: r_{c_k} = 0.1 + (k − 1) · (0.7 − 0.1)/(m − 1), k ∈ {1, 2, ..., m}. ... For example, DEK→B indicates that the D, E and K domain datasets are used as the FL training data, and the B domain dataset is used as the testing data. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used (e.g., GPU/CPU models, memory) for running its experiments within the provided text. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) in the provided text. |
| Experiment Setup | No | While the paper mentions balancing parameters λ1, λ2, λ3, λ4, λ5 in equations (2), (4), and (5), it does not provide their specific numerical values used in the experiments within the main text. It refers to "Implementation details of the FedCIFL algorithm and the baselines are provided in Appendix E," but these details are not available in the provided text. |
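The per-client bias-rate assignment quoted in the Dataset Splits row (Scenario 4) can be sketched as follows; the function name and signature are illustrative, not from the paper:

```python
def client_bias_rates(m, low=0.1, high=0.7):
    """Evenly spaced bias rates r_{c_k} = low + (k-1)*(high-low)/(m-1)
    for clients k = 1..m, as described for Scenario 4."""
    if m == 1:
        return [low]
    return [low + (k - 1) * (high - low) / (m - 1) for k in range(1, m + 1)]

# With m = 4 clients, rates are spread evenly across [0.1, 0.7]:
rates = client_bias_rates(4)
```

For m = 4 this yields rates of approximately 0.1, 0.3, 0.5 and 0.7, so each client's training distribution is biased to a different degree while the test set keeps r_{test} = 0.9.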