Federated Granger Causality Learning For Interdependent Clients With State Space Representation
Authors: Ayush Mohanty, Nazal Mohamed, Paritosh Ramanan, Nagi Gebraeel
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using synthetic data, we conduct comprehensive experiments to demonstrate the robustness of our approach to perturbations in causality, as well as its scalability with respect to communication size, number of clients, and the dimensionality of the raw data. We also evaluate performance on two real-world industrial control system datasets, reporting the volume of data saved by decentralization. |
| Researcher Affiliation | Academia | Georgia Institute of Technology, Atlanta, GA, USA; Oklahoma State University, Stillwater, OK, USA |
| Pseudocode | Yes | Pseudocode for our proposed framework is given in Appendix A.3. Readers can find the code of this paper and associated experiments at https://github.com/federated-interdependency-learning/fed_granger_causality.git |
| Open Source Code | Yes | Readers can find the code of this paper and associated experiments at https://github.com/federated-interdependency-learning/fed_granger_causality.git |
| Open Datasets | Yes | We utilized two ICS datasets: (1) HAI: Hardware-in-the-loop Augmented Industrial control system, Shin et al. (2023), and (2) SWaT: Secure Water Treatment, Mathur & Tippenhauer (2016). For both datasets, clients in our framework correspond to the processes in the datasets. Details of the raw data are given in Table 6. |
| Dataset Splits | No | The paper describes generating synthetic data and using nominal data for real-world datasets, but does not provide specific training/test/validation splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | No | The paper does not mention specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | Experiments began by checking convergence stability (ensuring ρ(H) < 1), adjusting hyperparameters if needed. Unless noted otherwise, experiments used two clients (M = 2) with Dm = D = 8 and Pm = P = 2 for all m; exceptions apply to the scalability studies. At training iteration k, θm is learned via gradient descent as shown in equation 1. Two partial gradients are involved in this step: one corresponding to the augmented client loss (Lm)a with a learning rate of η1, and the other to the server model's loss Ls with a learning rate of η2. Amn is also learned via gradient descent, with γ as the learning rate. |
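The setup row above mentions two computational ingredients: a pre-training stability check ρ(H) < 1 (spectral radius of an iteration matrix H) and an update of θm that combines two partial gradients with separate learning rates η1 and η2. The sketch below illustrates both on a toy quadratic problem; the matrix H, the losses, and all variable names are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def spectral_radius(H: np.ndarray) -> float:
    """Largest absolute eigenvalue of H, i.e. rho(H)."""
    return float(np.max(np.abs(np.linalg.eigvals(H))))

def combined_gradient_step(theta, grad_client, grad_server, eta1=0.01, eta2=0.005):
    """One update of theta using two partial gradients with separate rates:
    eta1 scales the (augmented) client-loss gradient, eta2 the server-loss gradient."""
    return theta - eta1 * grad_client - eta2 * grad_server

rng = np.random.default_rng(0)

# Toy iteration matrix, rescaled so that rho(H) = 0.9 < 1 (stability check passes).
A = rng.standard_normal((8, 8))
H = 0.9 * A / spectral_radius(A)
assert spectral_radius(H) < 1, "unstable: adjust hyperparameters until rho(H) < 1"

# Toy quadratic losses: L_client = ||theta - a||^2, L_server = ||theta - b||^2.
a, b = np.ones(8), np.zeros(8)
theta = rng.standard_normal(8)
for _ in range(500):
    theta = combined_gradient_step(theta, 2 * (theta - a), 2 * (theta - b))

# Fixed point of the combined update: theta* = (eta1 * a + eta2 * b) / (eta1 + eta2).
```

For these toy losses the iterate converges to the learning-rate-weighted average of the two minimizers, which makes the role of the η1/η2 split easy to see: the ratio η1/η2 controls how strongly the client objective dominates the server objective.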