Human-Imperceptible, Machine-Recognizable Images
Authors: Fusheng Hao, Fengxiang He, Yikai Wang, Fuxiang Wu, Jing Zhang, Dacheng Tao, Jun Cheng
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on ImageNet and COCO show that the proposed paradigm achieves comparable accuracy with the competitive methods. |
| Researcher Affiliation | Academia | Fusheng Hao1,2, Fengxiang He3, Yikai Wang4, Fuxiang Wu1,2, Jing Zhang5, Dacheng Tao6, Jun Cheng1,2. 1Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; 2The Chinese University of Hong Kong; 3University of Edinburgh; 4Beijing Normal University; 5The University of Sydney; 6Nanyang Technological University |
| Pseudocode | Yes | The pseudocodes of RS and MI in a PyTorch-like style are shown in the appendix. |
| Open Source Code | Yes | Code: https://github.com/FushengHao/PrivacyPreservingML |
| Open Datasets | Yes | For the image classification task, we benchmark the proposed PEViT on ImageNet-1K [Deng et al., 2009], which contains 1.28M training images and 50K validation images. For the object detection task, we benchmark the proposed PEYOLOS on COCO [Lin et al., 2014], which contains 118K training, 5K validation and 20K test images. |
| Dataset Splits | Yes | For the image classification task, we benchmark the proposed PEViT on ImageNet-1K [Deng et al., 2009], which contains 1.28M training images and 50K validation images. For the object detection task, we benchmark the proposed PEYOLOS on COCO [Lin et al., 2014], which contains 118K training, 5K validation and 20K test images. |
| Hardware Specification | Yes | The throughput is measured as the number of images processed per second on a V100 GPU. FPS is measured with batch size 1 on a single 1080Ti GPU. We adopt the default hyper-parameters of the DeiT training scheme [Touvron et al., 2020] except setting the batch size to 192 per GPU, where 8 NVIDIA A100 GPUs are used for training. |
| Software Dependencies | No | The paper mentions software by name (Timm library, publicly released code in [Fang et al., 2021]) but does not provide specific version numbers for these, nor does it list multiple key software components with their versions. |
| Experiment Setup | Yes | We adopt the default hyper-parameters of the DeiT training scheme [Touvron et al., 2020] except setting the batch size to 192 per GPU, where 8 NVIDIA A100 GPUs are used for training. |
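The table quotes the paper's throughput metric: images processed per second on a single GPU. As a minimal, library-free sketch of how such a metric is typically computed (the `dummy_model` here is a hypothetical stand-in, not the paper's PEViT, and no GPU is involved):

```python
import time

def dummy_model(batch):
    # Hypothetical stand-in for a forward pass; the paper measures a
    # real model (PEViT) on a V100 GPU.
    return [x * 2 for x in batch]

def measure_throughput(model, batch_size=192, num_batches=10):
    """Return images processed per second over `num_batches` batches."""
    batch = list(range(batch_size))
    start = time.perf_counter()
    for _ in range(num_batches):
        model(batch)
    elapsed = time.perf_counter() - start
    return (batch_size * num_batches) / elapsed

if __name__ == "__main__":
    print(f"throughput: {measure_throughput(dummy_model):.0f} images/s")
```

On real hardware one would additionally synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the clock and discard warm-up iterations, so wall-clock time reflects completed GPU work.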