HIPP: Protecting Image Privacy via High-Quality Reversible Protected Version

Authors: Xi Ye, Lina Wang, Run Wang, Jiatong Liu, Geying Yang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. Experiments on the CelebA, Helen, and LSUN datasets show that the SSIM between the restored and original images reaches 0.9899. Furthermore, compared with prior works, HIPP achieves the lowest runtime and file expansion rate, at 0.07 seconds and 1.1046, respectively.
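The headline restoration result is reported as SSIM between restored and original images. As an illustration only (the paper does not release evaluation code), here is a simplified single-window SSIM in pure Python; real evaluations typically use the windowed SSIM from a library such as scikit-image, so the constants and the no-windowing simplification here are assumptions:

```python
def global_ssim(x, y):
    """Simplified SSIM computed over the whole image as one window.

    x, y: flat sequences of pixel intensities in [0, 255].
    Standard stabilizing constants C1, C2 assume an 8-bit dynamic range.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    C1, C2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx * mx + my * my + C1) * (vx + vy + C2))
```

Identical inputs score exactly 1.0, so a restored image scoring 0.9899 is nearly pixel-faithful to the original.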
Researcher Affiliation: Academia. Xi Ye, Lina Wang, Run Wang, Jiatong Liu, and Geying Yang, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, China.
Pseudocode: No. The paper does not contain any clearly labeled pseudocode or algorithm blocks. The methods are described in prose and through diagrams such as Figure 2, not in a structured pseudocode format.
Open Source Code: No. The paper does not state that the authors release code for the HIPP methodology. It cites "https://github.com/ageitgey/face_recognition" for a third-party tool, but provides no link to their own implementation.
Open Datasets: Yes. "Datasets. In the experiments, 50000 images are randomly sampled from CelebA [Liu et al., 2015], Helen [Le et al., 2012], and LSUN [Yu et al., 2015] datasets and resized to 128×128 and 256×256 to form the training set."
Dataset Splits: Yes. "In the experiments, 50000 images are randomly sampled from CelebA [Liu et al., 2015], Helen [Le et al., 2012], and LSUN [Yu et al., 2015] datasets and resized to 128×128 and 256×256 to form the training set. Additionally, we select 2000 images from the remaining images to form a testing set."
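The reported split (50,000 training images sampled randomly, 2,000 test images drawn from the remainder) can be sketched as follows; the function name and the fixed seed are assumptions, since the paper does not specify its sampling procedure beyond "randomly sampled":

```python
import random

def split_dataset(image_paths, n_train=50_000, n_test=2_000, seed=0):
    """Randomly draw a training set, then a disjoint test set from the rest.

    Mirrors the paper's described split sizes; seed and ordering are
    illustrative assumptions, not taken from the paper.
    """
    rng = random.Random(seed)
    shuffled = list(image_paths)
    rng.shuffle(shuffled)
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    return train, test
```

Taking the test images only from the remainder guarantees the two sets are disjoint, which is what the paper's "from the remaining images" wording implies.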
Hardware Specification: No. The paper does not report hardware details such as GPU models, CPU specifications, or memory amounts used for the experiments.
Software Dependencies: No. The paper mentions using Glow [Kingma and Dhariwal, 2018] and StyleGAN [Karras et al., 2019] for image generation, and an Adam optimizer, but provides no version numbers for these components or any other libraries.
Experiment Setup: Yes. "Implementation Details. During the training procedure of E, we directly apply the pretrained G and G⁻¹ models, keeping their parameters frozen. Meanwhile, an Adam optimizer with β1 = 0, β2 = 0.99, ϵ = 10⁻⁸ is applied; the learning rate and iteration count are set to 10⁻⁵ and 200,000, respectively. As for hyperparameters λ1 and λ2, they are set to 10 and 200, respectively, when applying Glow as G. If utilizing StyleGAN as G and in-domain GAN inversion as G⁻¹, the values of λ1 and λ2 are changed to 10 and 1000, respectively."
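The reported training hyperparameters can be collected into a small configuration sketch. Since no code is released, the function and key names below are assumptions; only the numeric values come from the paper:

```python
def training_config(generator="glow"):
    """Return the paper's reported hyperparameters for training E.

    G and G^-1 are pretrained and frozen; only the encoder E is trained.
    Key names are illustrative, values are as reported in the paper.
    """
    cfg = {
        "optimizer": "Adam",
        "beta1": 0.0,
        "beta2": 0.99,
        "eps": 1e-8,
        "lr": 1e-5,
        "iterations": 200_000,
    }
    if generator == "glow":
        cfg.update({"lambda1": 10, "lambda2": 200})
    elif generator == "stylegan":  # with in-domain GAN inversion as G^-1
        cfg.update({"lambda1": 10, "lambda2": 1000})
    else:
        raise ValueError(f"unknown generator: {generator}")
    return cfg
```

Note that only λ2 changes between the two generator choices; the optimizer settings are shared.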