Federated Learning with Efficient Local Adaptation for Realized Volatility Prediction
Authors: Lei Zhao, Lin Cai, Wu-Sheng Lu
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental evaluations demonstrate FLARE-LA's superior performance, showcasing its ability to significantly enhance post-FL outcomes compared to state-of-the-art FL algorithms. The results underscore FLARE-LA's unique capability to drive advancements in financial forecasting and other high-stakes, rapidly evolving domains. |
| Researcher Affiliation | Academia | Lei Zhao (EMAIL), Department of Electrical and Computer Engineering, University of Victoria; Lin Cai (EMAIL), Department of Electrical and Computer Engineering, University of Victoria; Wu-Sheng Lu (EMAIL), Department of Electrical and Computer Engineering, University of Victoria |
| Pseudocode | Yes | Algorithm 1 Federated Learning with Adaptive Robustness and Efficiency for Local Adaptation (FLARE-LA) |
| Open Source Code | No | The paper does not contain any explicit statements about making the source code available, nor does it provide links to a code repository. |
| Open Datasets | Yes | We first utilize a dataset for realized volatility prediction, consisting of order book and trade data from multiple trading platforms. These experiments aim to demonstrate FLARE-LA's ability to handle extreme data heterogeneity, dynamic participation, and the fragmented nature of financial datasets while maintaining robust predictive performance. To further evaluate the generalizability of FLARE-LA, we extend our experiments to CIFAR10 and MNIST, two well-established datasets in FL research. |
| Dataset Splits | Yes | Each trading platform randomly splits its data into a training set and a test set, with 20% allocated for testing. This setup allows us to estimate the performance of each FL algorithm on each trading platform's test set using its personalized model. For both schemes, the test sets remain clean to ensure a fair and accurate evaluation of model performance. |
| Hardware Specification | Yes | All experiments were conducted on an experimental platform featuring an 8-core CPU, a 14-core GPU, and 16GB of RAM. This setup ensures consistent benchmarking across all evaluated FL methods. |
| Software Dependencies | No | The paper mentions using the "ResNet model" for local training but does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers. |
| Experiment Setup | Yes | Input: Global model initialization w_0, learning rates α_l and {α_g^t}, participation distribution, local datasets {D_c}, c ∈ C, regularization parameter λ ≥ 0, ill-condition tolerance ε > 0. The Dirichlet distribution's concentration parameter, α, determines the stock distribution for each trading platform and is set to 0.5 in the experiments. In Fig. 1, the paper compares the performance of FLARE-LA against Individual Train and baseline FL methods such as FedProx, SCAFFOLD, and FedPer over 50 epochs (Fig. 1(a)) and 200 epochs (Fig. 1(b)). As shown in Fig. 2, FLARE-LA consistently outperforms baseline methods, including FedProx, SCAFFOLD, FedPer, and SuPerFed, in terms of Mean Loss, VaR95%, and CVaR95% for realized volatility prediction tasks. |
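The experiment setup states that a Dirichlet distribution with concentration parameter α = 0.5 determines how stocks (or classes, for CIFAR10/MNIST) are distributed across trading platforms. The paper does not publish its partitioning code, but the standard Dirichlet-based non-IID partition it describes can be sketched as follows; the function name and argument layout here are illustrative, not from the paper:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5, seed=0):
    """Partition sample indices across clients so that each class's
    samples are split according to a Dirichlet(alpha) draw.
    Smaller alpha -> more heterogeneous (non-IID) client datasets."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        # Proportions of this class assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        # Convert proportions to cut points over the class's samples.
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ix) for ix in client_indices]

# Example: 1000 samples, 10 classes, 5 clients, alpha = 0.5 as in the paper.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, n_clients=5, alpha=0.5)
assert sum(len(p) for p in parts) == len(labels)
```

With α = 0.5 each client's class mix is noticeably skewed; driving α toward 0 concentrates each class on a few clients, while large α approaches an IID split.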
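The dataset-split protocol quoted above has each trading platform hold out 20% of its local data for testing. A minimal per-client sketch of that split, assuming a simple random holdout (helper name and arguments are illustrative, not from the paper):

```python
import numpy as np

def local_train_test_split(n_samples, test_frac=0.20, seed=0):
    """Randomly split one client's sample indices into train/test,
    mirroring the paper's per-platform 20% test allocation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_test = int(round(test_frac * n_samples))
    return idx[n_test:], idx[:n_test]  # (train indices, test indices)

train_idx, test_idx = local_train_test_split(500)
assert len(train_idx) == 400 and len(test_idx) == 100
```

Because the split is drawn independently on each client, the held-out sets stay local, which matches the paper's goal of evaluating each platform's personalized model on its own clean test set.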