Misclassification-driven Fingerprinting for DNNs Using Frequency-aware GANs

Authors: Weixing Liu, Shenghua Zhong

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our method achieves a state-of-the-art (SOTA) AUC of 0.98 on the Tiny-ImageNet dataset under IP removal attacks, outperforming existing methods by 8%, and consistently achieves the best ABP for three types of IP detection and erasure attacks on the GTSRB dataset."
Researcher Affiliation | Academia | "1 College of Computer Science and Software Engineering, Shenzhen University; 2 National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University. EMAIL, EMAIL"
Pseudocode | No | The paper describes the method using mathematical formulations for its loss functions (L_G, L_adv, L_mis, L_D) and conceptual explanations, but does not include any structured pseudocode or algorithm blocks.
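For orientation only, a combined generator objective of the kind the paper's loss names suggest might be written as below; the weighting term λ and the exact composition are assumptions, not taken from the paper:

```latex
% Hypothetical composition (not quoted from the paper):
% L_adv pushes generated samples toward the data distribution,
% L_mis pushes the source model toward targeted misclassification.
\mathcal{L}_G = \mathcal{L}_{adv} + \lambda \, \mathcal{L}_{mis}
```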
Open Source Code | Yes | "Our source code is available at https://github.com/wason981/FrequencyFingerprinting."
Open Datasets | Yes | "To validate the effectiveness and robustness of our method, we conduct experiments on CIFAR-10, GTSRB, and Tiny-ImageNet."
Dataset Splits | Yes | CIFAR-10 consists of 60K 32×32 color images in 10 distinct classes, with a training set of 50K images and a test set of 10K images. GTSRB contains over 50K images of German traffic signs in 43 classes, with 39K images for training and 12K for testing. Tiny-ImageNet contains 110K 64×64 images of 200 object classes, with 100K images for training and 10K for testing.
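The quoted split sizes can be sanity-checked in a few lines; the counts below are copied from the evidence above, and the dictionary layout is only an illustrative convention:

```python
# Dataset splits as quoted in the paper under review.
splits = {
    "CIFAR-10":      {"train": 50_000,  "test": 10_000},  # 60K total
    "GTSRB":         {"train": 39_000,  "test": 12_000},  # "over 50K" total
    "Tiny-ImageNet": {"train": 100_000, "test": 10_000},  # 110K total
}

# Each quoted train/test pair should account for the quoted total.
totals = {name: s["train"] + s["test"] for name, s in splits.items()}
```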
Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU models, CPU types) used for conducting the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers (e.g., Python, PyTorch, TensorFlow versions) that would be needed to replicate the experiments.
Experiment Setup | No | The paper states: "Similar to the settings in [Guan et al., 2022; Lukas et al., 2019; Cao et al., 2021], we select the commonly used model VGG16 as the source model; VGG13, ResNet18, DenseNet121, MobileNetV2 as irrelevant models. For each setting, we train five models under each stealing attack and average the results across these models to mitigate the impact of randomness." However, it does not provide hyperparameters such as learning rates, batch sizes, epoch counts, or optimizer configurations for training the models.
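Since AUC is the headline metric (0.98 on Tiny-ImageNet under IP removal attacks), a minimal, dependency-free sketch of a verification AUC over fingerprint match scores is shown below; the score values and the rank-based formula are illustrative assumptions, not the paper's implementation:

```python
def auc(pos_scores, neg_scores):
    """Probability that a randomly chosen positive (stolen-model) score
    ranks above a randomly chosen negative (irrelevant-model) score,
    counting ties as half; this equals the ROC AUC."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical fingerprint match rates: surrogate (stolen) models should
# reproduce the source model's misclassifications far more often than
# irrelevant models such as VGG13, ResNet18, DenseNet121, MobileNetV2.
stolen = [0.95, 0.90, 0.88, 0.91, 0.93]   # five surrogates per attack
irrelevant = [0.12, 0.30, 0.25, 0.18]     # independently trained models
score = auc(stolen, irrelevant)           # perfectly separated -> 1.0
```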