Hardware and Software Platform Inference
Authors: Cheng Zhang, Hanna Foerster, Robert D. Mullins, Yiren Zhao, Ilia Shumailov
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate HSPI against models served on different real hardware and find that in a white-box setting we can distinguish between different GPUs with between 83.9% and 100% accuracy. Even in a black-box setting we achieve results that are up to 3× higher than random-guess accuracy. |
| Researcher Affiliation | Collaboration | (1) Department of Computing, Imperial College London, London, United Kingdom; (2) Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom; (3) Google DeepMind, London, United Kingdom. |
| Pseudocode | No | The paper describes methods (HSPI-BI, HSPI-LD) using narrative text and mathematical equations but does not present a formal pseudocode or algorithm block. |
| Open Source Code | Yes | Our code is available at https://github.com/ChengZhang-98/HSPI. |
| Open Datasets | Yes | Vision Models: We mainly use the image classification models from torchvision and fine-tune them on CIFAR10. |
| Dataset Splits | Yes | Before HSPI experiments, we fine-tune these models on CIFAR10 to ensure they achieve reasonable accuracy on the CIFAR10 test set. |
| Hardware Specification | Yes | For actual GPUs, we include NVIDIA H100, A100, GeForce RTX 2080 Ti, and Quadro RTX 8000. Not all GPUs were used for all experiments due to different server locations and the difficulty of connecting all of them for the attack generation phase. (...) For actual GPUs, we consider NVIDIA A100, L40S, RTX A6000, and GeForce RTX 3090. |
| Software Dependencies | No | For the version information of software, please refer to our source codes. |
| Experiment Setup | Yes | Specifically, we use the SGD optimizer and a linear learning rate scheduler with initial learning rate = 1e-3. The fine-tuning batch size is 128 and we fine-tune all the models for 3 epochs. |
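The core intuition behind the HSPI-BI (border inputs) method named above can be illustrated without any ML stack: the same accumulation, rounded to different floating-point precisions, can land on different sides of a decision boundary, so a carefully crafted input reveals the serving platform's arithmetic. The sketch below is a hypothetical toy, not the paper's attack; the "logit" and the crafted input values are invented for illustration and use only Python's built-in half-precision conversion.

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python double to IEEE-754 half precision and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def logit(xs, ws, low_precision: bool) -> float:
    """Toy dot product; optionally round the accumulator to fp16 each step,
    mimicking a backend that keeps intermediates in half precision."""
    acc = 0.0
    for xi, wi in zip(xs, ws):
        acc += xi * wi
        if low_precision:
            acc = to_fp16(acc)
    return acc

# Hypothetical "border input": 2**-12 is below the fp16 spacing (2**-10)
# around 1.0, so it is absorbed when the accumulator is rounded to fp16,
# but survives in full precision.
weights = [1.0, 1.0, -1.0]
x = [1.0, 2**-12, 1.0]

full = logit(x, weights, low_precision=False)  # small positive residue
half = logit(x, weights, low_precision=True)   # residue rounded away

# The sign of the logit (the "classification") now differs per backend:
print(full > 0, half > 0)
```

Running this, the full-precision path keeps the residue `2**-12` while the fp16 path returns exactly `0.0`, so the two backends disagree on whether the logit is positive. The paper's actual method crafts such inputs against real models and GPU software stacks, but the mechanism is the same rounding discrepancy.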