Transferable Adversarial Face Attack with Text Controlled Attribute

Authors: Wenyun Li, Zheng Zhang, Xiangyuan Lan, Dongmei Jiang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on two high-resolution face recognition datasets validate that our TCA2 method can generate natural text-guided adversarial impersonation faces with high transferability. We also evaluate our method on real-world face recognition systems, i.e., Face++ and Aliyun, further demonstrating the practical potential of our approach.
Researcher Affiliation | Academia | 1 Harbin Institute of Technology, Shenzhen, China; 2 Pengcheng Laboratory, Shenzhen, China; 3 Pazhou Laboratory (Huangpu), Guangzhou, China. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes methods and algorithms in text and mathematical formulas but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the methodology described, nor does it provide a direct link to a code repository. A GitHub link is provided for a dataset, not for the authors' implementation.
Open Datasets | Yes | We conducted experiments using two publicly available facial datasets: (1) the CelebA-Identity dataset (Na, Ji, and Kim 2022), which is a subset of the CelebA-HQ dataset (Huang et al. 2018) [...] (2) The KID-F dataset, also known as the K-pop Idol Dataset Female, consists of approximately 6,000 high-quality facial images of Korean female idols. For our experiments, we selected about 2,000 images representing 100 identities from the KID-F dataset. (https://github.com/PCEO-AI-CLUB/KID-F)
Dataset Splits | No | The paper states: 'We randomly chose 1,000 images from different identities as source images from both datasets. Additionally, five images were selected as target facial images in each dataset.' This describes the selection of images for attack scenarios but does not provide specific training/test/validation splits for the datasets used to fine-tune the FR models or for the TCA2 framework's components.
Hardware Specification | Yes | All experiments are conducted using PyTorch on a V100 GPU with 32 GB of memory.
Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number for this or any other key software components, which is required for reproducibility.
Experiment Setup | Yes | For training optimization, we use the Adam optimizer with β1 set to 0.9, β2 set to 0.999, and a learning rate of 0.01. The training process is run for 50 epochs. We set the values of λ_guide and λ_perc to 0.5 and 0.05, respectively.
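Since no code is released, the reported training configuration can only be sketched. Below is a minimal PyTorch setup using the stated hyperparameters; the parameter tensor, the individual loss terms, and the assumption that λ_guide and λ_perc linearly weight a guidance and a perceptual loss are all placeholders, not taken from the authors' implementation.

```python
import torch

# Hyperparameters reported in the paper
LR = 0.01
BETAS = (0.9, 0.999)
EPOCHS = 50
LAMBDA_GUIDE = 0.5   # weight for the guidance loss term (assumed linear weighting)
LAMBDA_PERC = 0.05   # weight for the perceptual loss term (assumed linear weighting)

# Placeholder learnable variables standing in for the attack's optimized parameters
params = [torch.nn.Parameter(torch.zeros(8))]
optimizer = torch.optim.Adam(params, lr=LR, betas=BETAS)

def total_loss(adv_loss, guide_loss, perc_loss):
    # Hypothetical combined objective: adversarial term plus weighted
    # guidance and perceptual terms (form assumed, not from the paper)
    return adv_loss + LAMBDA_GUIDE * guide_loss + LAMBDA_PERC * perc_loss

for epoch in range(EPOCHS):
    optimizer.zero_grad()
    # Dummy loss terms in place of the paper's actual loss computations
    loss = total_loss(params[0].pow(2).sum(),
                      params[0].abs().sum(),
                      torch.tensor(0.0))
    loss.backward()
    optimizer.step()
```

This only demonstrates how the reported optimizer settings and loss weights would be wired up; reproducing the method itself would require the unreleased model and loss definitions.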