Do Not DeepFake Me: Privacy-Preserving Neural 3D Head Reconstruction Without Sensitive Images

Authors: Jiayi Kong, Xurui Song, Shuo Huai, Baixin Xu, Jun Luo, Ying He

AAAI 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experiments show that the resulting geometry is comparable to methods using full images, while the process is resistant to DeepFake applications and facial recognition (FR) systems, thereby proving its effectiveness in privacy protection. We evaluate the quality of our algorithm's geometry recovery through both qualitative and quantitative comparisons with the VolSDF algorithm." |
| Researcher Affiliation | Academia | Jiayi Kong¹, Xurui Song¹, Shuo Huai², Baixin Xu¹, Jun Luo², Ying He¹* — ¹S-Lab, Nanyang Technological University, Singapore; ²College of Computing and Data Science, Nanyang Technological University, Singapore. EMAIL, EMAIL |
| Pseudocode | No | The paper describes the algorithmic pipeline through textual descriptions and a visual diagram (Figure 1), but does not contain a formally structured pseudocode block or algorithm section. |
| Open Source Code | No | The paper does not explicitly state that source code is available, provide a link to a code repository, or mention code in supplementary materials. |
| Open Datasets | Yes | "In our experiments, we utilize two representative datasets: FaceScape (Yang et al. 2020) and High-Fidelity 3D Head (H3DS) (Ramon et al. 2021). Each dataset includes 30 to 36 RGB images per identity at 64×64 pixels, with lower resolution and increased blurring chosen to enhance privacy protection." |
| Dataset Splits | No | "In our experiments with the FaceScape dataset, we utilize 10 images that are inherently privacy-neutral and process 20 images as privacy-protected. Similarly, for the H3DS dataset, 16 images are inherently privacy-neutral, and 20 images are processed as privacy-protected." This describes the types and counts of images used for privacy processing, but not training, validation, or test splits for model evaluation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers, such as programming languages or libraries like Python, PyTorch, or TensorFlow. |
| Experiment Setup | Yes | "In Stage 1, we set λ₃ = λ₄ = 0 to disable gradients, and set λ₁ = 0 in Stage 2 to activate it. Putting it all together, our training loss is as follows: L = λ₁·L_rgb + λ₂·L_eik + λ₃·L_lip + λ₄·L_grad." |
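The quoted setup describes a two-stage weighting of the total loss L = λ₁·L_rgb + λ₂·L_eik + λ₃·L_lip + λ₄·L_grad, with λ₃ = λ₄ = 0 in Stage 1 and λ₁ = 0 in Stage 2. A minimal sketch of that scheduling is shown below; the concrete default values for λ₂, λ₃, and λ₄ and the loss-term inputs are placeholders, not taken from the paper.

```python
# Hypothetical sketch of the stage-dependent loss weighting quoted above.
# Only the zeroing pattern (λ3 = λ4 = 0 in Stage 1, λ1 = 0 in Stage 2)
# comes from the paper; the nonzero default weights are assumptions.

def stage_lambdas(stage, l1=1.0, l2=0.1, l3=1e-5, l4=0.1):
    """Return (λ1, λ2, λ3, λ4) for the given training stage."""
    if stage == 1:
        # Stage 1: Lipschitz and gradient terms are disabled.
        return (l1, l2, 0.0, 0.0)
    # Stage 2: the RGB term is disabled, activating the other regularizers.
    return (0.0, l2, l3, l4)

def total_loss(stage, L_rgb, L_eik, L_lip, L_grad):
    """Combine the four loss terms with stage-dependent weights."""
    lam1, lam2, lam3, lam4 = stage_lambdas(stage)
    return lam1 * L_rgb + lam2 * L_eik + lam3 * L_lip + lam4 * L_grad
```

In practice the four inputs would be tensors produced by the rendering and SDF-regularization passes; here plain floats suffice to show how the weights switch between stages.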