Mjölnir: Breaking the Shield of Perturbation-Protected Gradients via Adaptive Diffusion
Authors: Xuan Liu, Siqi Cai, Qihua Zhou, Song Guo, Ruibin Li, Kaiwei Lin
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that Mjölnir effectively recovers the protected gradients and exposes the Federated Learning process to the threat of gradient leakage, achieving superior performance in gradient denoising and private data recovery. ... Experimental results under the general perturbation protection FL system show that Mjölnir achieves the best gradient denoising quality and privacy leakage ability on commonly used image datasets. |
| Researcher Affiliation | Academia | 1The Hong Kong Polytechnic University, Hong Kong 2Hubei Key Laboratory of Transportation Internet of Things, Wuhan University of Technology, Wuhan, China 3College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China 4Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong |
| Pseudocode | Yes | Algorithm 1: Gradients Extracted for Mjölnir Training; Algorithm 2: Gradient Diffusion Model Training; Algorithm 3: Generate Original Gradient |
| Open Source Code | No | The paper does not contain an explicit statement about releasing its own source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | (B) Benchmarks and Datasets. We employ MNIST, CIFAR100, and STL10 as client privacy datasets, which also serve as the ground truth for privacy leakage evaluation. We extract the unperturbed original gradients (∇W) of the aforementioned three datasets from the local training model of the target client as the reference benchmark of gradient denoising under the FL-PP paradigm. The Mjölnir gradient diffusion model is trained with gradients extracted from a separate dataset, Fashion MNIST. |
| Dataset Splits | No | The paper mentions using well-known datasets like MNIST, CIFAR100, STL10, and Fashion MNIST, but it does not specify the exact training/test/validation splits (e.g., percentages, sample counts, or explicit references to standard splits used for their experiments) within the main text. |
| Hardware Specification | Yes | Table 3: Gradient denoising average inference time of Mjölnir variant models and non-diffusion denoising models under FL-PP. (Device: NVIDIA GeForce RTX 2060 GPU; Intel(R) Core(TM) i7-10870H CPU at 2.20GHz) |
| Software Dependencies | No | The paper describes the methodology and refers to other models (like DDPM) but does not list specific software components with version numbers (e.g., Python, PyTorch, CUDA versions) that would be needed for replication. |
| Experiment Setup | No | The paper describes the conceptual adaptive parameters (M, T, α_t) and the objective function. It also specifies Differential Privacy parameters (ε = 1, 5, 10; δ = 10^-5) for the threat model. However, it does not provide specific numerical hyperparameters for training Mjölnir itself, such as learning rates, batch sizes, number of epochs, or the specific optimizer used. |
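For context on the threat model the table references: the paper's FL-PP setting assumes clients perturb gradients with (ε, δ)-differentially-private Gaussian noise before upload. A minimal sketch of that perturbation step, using the classic Gaussian-mechanism noise scale and the paper's stated ε and δ values (the clipping norm and NumPy-based implementation are assumptions for illustration, not taken from the paper):

```python
import math
import numpy as np

def gaussian_mechanism_sigma(epsilon, delta, sensitivity=1.0):
    """Noise scale for the classic (epsilon, delta)-Gaussian mechanism:
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def perturb_gradient(grad, epsilon, delta, clip_norm=1.0, rng=None):
    """Clip the gradient to L2 norm clip_norm (bounding sensitivity),
    then add isotropic Gaussian noise calibrated to (epsilon, delta)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = gaussian_mechanism_sigma(epsilon, delta, sensitivity=clip_norm)
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

# Paper's strongest protection level: epsilon = 1, delta = 1e-5.
grad = np.full(4, 0.5)
protected = perturb_gradient(grad, epsilon=1.0, delta=1e-5, clip_norm=1.0)
```

Mjölnir's attack operates on outputs like `protected`, training a diffusion model to denoise them back toward the original ∇W; the weaker the protection (larger ε, hence smaller σ), the easier that recovery becomes.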