One-Shot Heterogeneous Federated Learning with Local Model-Guided Diffusion Models

Authors: Mingzhao Yang, Shangchao Su, Bin Li, Xiangyang Xue

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive quantitative and visualization experiments on three large-scale real-world datasets, along with theoretical analysis, demonstrate that the synthetic datasets generated by FedLMG exhibit comparable quality and diversity to the client datasets, which leads to an aggregated model that outperforms all compared methods and even the performance ceiling, further elucidating the significant potential of utilizing DMs in FL. ... Table 1. Performance of different methods on OpenImage, DomainNet, Unique NICO++, and Common NICO++ under feature distribution skew...
Researcher Affiliation | Academia | Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University. Correspondence to: Bin Li <EMAIL>.
Pseudocode | Yes | Algorithm 1 FedLMG: a heterogeneous one-shot Federated learning method with Local model-Guided diffusion models
Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the methodology, nor does it include a link to a code repository.
Open Datasets | Yes | We conduct experiments on three large-scale real-world image datasets: OpenImage (Kuznetsova et al., 2020), DomainNet (Peng et al., 2019) and NICO++ (Zhang et al., 2023c).
Dataset Splits | No | The paper describes how clients' data is partitioned by data domain and category, but it does not specify explicit training, validation, or test splits for either the clients' local datasets or the aggregated synthetic dataset, so the evaluation partitioning cannot be directly reproduced.
Hardware Specification | Yes | All experiments are conducted with four NVIDIA GeForce RTX 3090 GPUs.
Software Dependencies | Yes | The pre-trained DM we mainly used is Stable-diffusion-v1.5 from the Hugging Face model repository... We also use Stable-diffusion-v2.1 from the Hugging Face model repository and the pre-trained Latent Diffusion Model (Rombach et al., 2022) from GitHub.
Experiment Setup | Yes | Regarding specific hyperparameters, the weight λ in the loss function is set to 0.2. The relevant hyperparameters for the diffusion generation process are set to their default values. The number of inference steps is 50, and the guidance scale of the generation is 3.
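For reproduction attempts, the reported hyperparameters can be gathered into a single configuration. The sketch below is illustrative only: the `GenConfig` name and the additive loss structure (diffusion loss plus λ times a local-model guidance term) are assumptions, not taken from the paper; only the numeric values (λ = 0.2, 50 inference steps, guidance scale 3) come from the report.

```python
from dataclasses import dataclass


@dataclass
class GenConfig:
    """Hypothetical container for the generation settings quoted in the report."""
    model_id: str = "Stable-diffusion-v1.5"  # pre-trained DM named in the report
    num_inference_steps: int = 50            # "The number of inference steps is 50"
    guidance_scale: float = 3.0              # "the guidance scale of the generation is 3"
    loss_weight_lambda: float = 0.2          # "the weight λ in the loss function is set to 0.2"


def total_loss(l_diffusion: float, l_local_guidance: float,
               lam: float = GenConfig.loss_weight_lambda) -> float:
    """Assumed form of the combined objective: L = L_diffusion + λ · L_guidance."""
    return l_diffusion + lam * l_local_guidance
```

A reproduction would pass `num_inference_steps` and `guidance_scale` directly to the sampling loop of whichever Stable Diffusion implementation is used; the remaining generation hyperparameters are stated to be left at their defaults.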