Personalized Label Inference Attack in Federated Transfer Learning via Contrastive Meta Learning
Authors: Hanyu Zhao, Zijie Pan, Yajie Wang, Zuobin Ying, Lei Xu, Yu-an Tan
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that the proposed attack has the ability to extract local personalized information from the differences before and after fine-tuning, improving the accuracy of the attack in the absence of a downstream model. Our experiments indicate that the CML attack can achieve a high attack success rate of 79.11% on the evaluated dataset with Dirichlet distribution parameter α = 0.1. |
| Researcher Affiliation | Academia | 1. Beijing Institute of Technology, Beijing, China; 2. City University of Macau, Macau, China. EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods and processes through narrative text and diagrams (Figure 1), but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include links to code repositories. |
| Open Datasets | Yes | We evaluated the CML label inference attack on 2 large-scale benchmark datasets: CIFAR-10 and CIFAR-100 (Krizhevsky, Hinton et al. 2009). |
| Dataset Splits | Yes | On each dataset, we adopt the setting of the Dirichlet non-IID data distribution. ... We take α = 0.5, 0.3, and 0.1 in our experiment to illustrate the effect of heterogeneity on attacks. There were 20 clients involved. ... In the label inference attack, the number of samples per client is 3,000, and our auxiliary data is 500 random-label samples, including 150 random samples from the target client. |
| Hardware Specification | Yes | All algorithms were implemented using PyTorch v2.2.0 and executed on an NVIDIA V100 GPU with 32 GB of memory. |
| Software Dependencies | Yes | All algorithms were implemented using PyTorch v2.2.0 and executed on an NVIDIA V100 GPU with 32 GB of memory. |
| Experiment Setup | Yes | AlexNet is chosen as the training model in FedRep; its global model contains 7 layers, i.e., the classification layer is excluded. 50% of clients are randomly chosen to participate in training per round, and training finishes at the 30th round. Each client trains for 8 local rounds: 4 rounds fine-tuning the head with the body frozen, and 4 rounds fine-tuning the body with the head frozen. ... The hyperparameters of the attack model are demonstrated in Table 1: CIFAR-10 — ULIA (lr 0.001, batch 64, 600 epochs), CLIA (lr 0.01, batch 256, 600 epochs), CML (Ours) (lr 0.001, batch 256, 600 epochs); CIFAR-100 — all methods (lr 0.001, batch 256, 4000 epochs). |
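The paper does not include code for its Dirichlet non-IID client split (α = 0.5, 0.3, 0.1 over 20 clients). The sketch below shows a common way such a split is implemented; the function name `dirichlet_partition` and the toy label array are our own illustrative choices, not from the paper.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=20, alpha=0.1, seed=0):
    """Partition sample indices across clients with a Dirichlet prior.

    For each class, a Dirichlet(alpha) draw decides what fraction of that
    class's samples each client receives; smaller alpha means more skew
    (stronger heterogeneity), matching the paper's use of alpha = 0.1.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        # Shuffle this class's sample indices, then slice them by the
        # per-client proportions drawn from Dirichlet(alpha).
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(client_indices, np.split(idx, cuts)):
            client.extend(part.tolist())
    return client_indices

# Toy example: 10 balanced classes, 3,000 samples in total.
labels = np.repeat(np.arange(10), 300)
parts = dirichlet_partition(labels, num_clients=20, alpha=0.1)
assert sum(len(p) for p in parts) == len(labels)
```

With α = 0.1 most clients end up holding samples from only a few classes, which is the heterogeneity regime where the reported 79.11% attack success rate was measured.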