Recoverable Facial Identity Protection via Adaptive Makeup Transfer Adversarial Attacks
Authors: Xiyao Liu, Junxing Ma, Xinda Wang, Qianyu Lin, Jian Zhang, Gerald Schaefer, Cagatay Turkay, Hui Fang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our method provides significantly improved attack success rates while maintaining higher visual quality compared to state-of-the-art makeup transfer-based adversarial attack methods. Our code and supplementary materials are available on GitHub. The paper includes sections like "Experiments", "Experimental Settings", "SoTA Benchmarks", "Unauthorised FR Models", "Evaluation Metrics", "Results and Comparison with SoTA", "Black-box Attack Evaluation", "Visual Quality Evaluation", "Identity Recovery Evaluation", and "Ablation Study", all of which describe empirical studies and data analysis. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Central South University, China 2School of Software and Microelectronics, Peking University, China 3Department of Computer Science, Loughborough University, U.K. 4Centre for Interdisciplinary Methodologies, University of Warwick, U.K. All authors are affiliated with universities, indicating an academic affiliation. |
| Pseudocode | Yes | Algorithm 1: Adaptive target image selection. Input: source face image s. Parameters: 68 landmarks l_s of s; landmarks of images in target set T; L2 distance dist_{s,t} between l_s and l_t; adaptive threshold τ; pre-defined ratio r_a; subset T_{<τ} of targets with distance less than τ. Output: adaptive target t. 1: l_s ← lm(s); 2: l_T ← lm(T); 3: for t ∈ T do; 4: dist_{s,t} ← ‖l_s − l_t‖_2; 5: end for; 6: τ ← Cal_thre(r_a); 7: T_{<τ} ← {t ∈ T : dist_{s,t} < τ}; 8: t ← random_select(T_{<τ}) |
| Open Source Code | Yes | Our code and supplementary materials are available on GitHub. GitHub: github.com/tttianyu/RMT-GAN |
| Open Datasets | Yes | Following (Hu et al. 2022; Li et al. 2018; Chen et al. 2019), we use the Makeup Transfer (MT) dataset (Li et al. 2018) as our training dataset, which contains 1,115 non-makeup and 2,719 makeup images. Our test data are drawn from two public datasets: 500 non-makeup/500 makeup images from CelebA-HQ (Karras et al. 2018) and 333 non-makeup/302 makeup images from the LADN dataset (Gu et al. 2019) as source/reference images. |
| Dataset Splits | Yes | Our test data are drawn from two public datasets: 500 non-makeup/500 makeup images from CelebA-HQ (Karras et al. 2018) and 333 non-makeup/302 makeup images from the LADN dataset (Gu et al. 2019) as source/reference images. |
| Hardware Specification | No | The paper does not explicitly mention the specific hardware used for running its experiments, such as GPU models, CPU models, or cloud computing specifications. It only refers to general FR systems or models without detailing the hardware used for their own method's training or testing. |
| Software Dependencies | No | The paper mentions using FR models like IR152, IRSE50, FaceNet, MobileFace, Face++, and Aliyun, and adopting network architectures from PSGAN, but it does not specify version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA) used in their implementation. |
| Experiment Setup | No | The paper describes the model architecture and various loss functions (L_gan^D, L_gan^G, L_adv^G, L_cyc^{G,R}, L_rec^R, L_gan^R, L_his^G, L_sr^{G,R}) along with their loss weight hyperparameters (λ's). However, it does not provide specific values for these hyperparameters, nor details like learning rate, batch size, or number of epochs in the main text. It states, "More details of network architecture and the training procedure are provided in Supplementary Materials due to the page limitation." |
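
Algorithm 1 above (adaptive target image selection) can be sketched in Python. This is an illustrative reading, not the authors' code: the landmark extractor is assumed to be given as precomputed 68×2 arrays, and `Cal_thre(r_a)` is interpreted here as a percentile-based cutoff that keeps the closest fraction `ratio` of targets, which is one plausible realization the paper does not spell out.

```python
import random
import numpy as np

def select_adaptive_target(source_landmarks, target_landmarks, ratio):
    """Pick a random target whose 68-point landmark layout is close
    (in L2 distance) to the source face's landmarks.

    source_landmarks: (68, 2) array for the source face s.
    target_landmarks: dict mapping target id -> (68, 2) landmark array.
    ratio: pre-defined fraction r_a of closest targets to keep (0..1].
    """
    # Steps 3-5: L2 distance between source and each target's landmarks.
    dists = {t: float(np.linalg.norm(source_landmarks - lm))
             for t, lm in target_landmarks.items()}
    # Step 6: adaptive threshold tau from the ratio (assumed percentile cutoff).
    tau = np.percentile(list(dists.values()), ratio * 100)
    # Step 7: subset of targets strictly closer than tau.
    candidates = [t for t, d in dists.items() if d < tau]
    # Step 8: uniform random pick from the candidate subset.
    return random.choice(candidates)
```

With `ratio = 0.5`, for instance, only targets in the closer half of the landmark-distance distribution remain eligible, and one is drawn at random so the chosen target varies between runs while staying geometrically compatible with the source face.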