Learning Robust and Privacy-Preserving Representations via Information Theory

Authors: Binghui Zhang, Sayedeh Leila Noorbakhsh, Yun Dong, Yuan Hong, Binghui Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluations: We evaluate ARPRL on both synthetic and real-world datasets. The results on the synthetic dataset are for visualization and for verifying the tradeoff. Experimental Setup: We train the neural networks via Stochastic Gradient Descent (SGD)... Results: Table 1 shows the results on the three datasets, where we report the robust accuracy (under the l attack), normal test accuracy, and attribute inference accuracy (as well as the gap to random guessing).
Researcher Affiliation | Academia | 1 Illinois Institute of Technology, 2 Milwaukee School of Engineering, 3 University of Connecticut
Pseudocode | Yes | Figure 1 overviews our ARPRL. Algorithm 1 in the Appendix details the training of ARPRL.
Open Source Code | Yes | Code & Full Report: https://github.com/ARPRL/ARPRL
Open Datasets | Yes | We use three real-world datasets from different applications, i.e., the widely-used CelebA (Liu et al. 2015) image dataset (150K training images and 50K for testing), the Loans (Hardt, Price, and Srebro 2016), and Adult Income (Dua and Graff 2017) datasets.
Dataset Splits | Yes | Each circle indicates a class and has 5,000 samples, where 80% of the samples are for training and the remaining 20% for testing. ... CelebA (Liu et al. 2015) image dataset (150K training images and 50K for testing). (A minimal split sketch follows the table.)
Hardware Specification | Yes | We implement ARPRL in PyTorch and use the NSF Chameleon Cloud GPUs (Keahey et al. 2020) (CentOS 7, CUDA 11, NVIDIA RTX 6000) to train the model.
Software Dependencies | Yes | We implement ARPRL in PyTorch and use the NSF Chameleon Cloud GPUs (Keahey et al. 2020) (CentOS 7, CUDA 11, NVIDIA RTX 6000) to train the model.
Experiment Setup | Yes | We train the neural networks via Stochastic Gradient Descent (SGD), where the batch size is 100 and we use 10 local epochs and 50 global epochs in all datasets. The learning rate in SGD is set to 1e-3. (A minimal training-loop sketch follows the table.)
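
The quoted split details admit a small reproduction sketch. Below is a minimal sketch of the reported 80/20 train/test split on the synthetic data, assuming three classes of 5,000 samples each; the dataset construction (scikit-learn's make_blobs standing in for the class "circles") and all variable names are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch (not the authors' code) of the reported 80/20 split on the
# synthetic data; make_blobs stands in for the actual class "circles".
import torch
from torch.utils.data import TensorDataset, random_split
from sklearn.datasets import make_blobs

# Three synthetic classes with 5,000 samples each (15,000 total).
X, y = make_blobs(n_samples=5000 * 3, centers=3, n_features=2, random_state=0)
dataset = TensorDataset(torch.tensor(X, dtype=torch.float32),
                        torch.tensor(y, dtype=torch.long))

# 80% of the samples for training, the remaining 20% for testing.
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
```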
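Likewise, a hedged sketch of the quoted training configuration (SGD, batch size 100, learning rate 1e-3, 10 local epochs and 50 global epochs); the model, placeholder data, loss, and the nested-loop reading of local/global epochs are assumptions for illustration only, with just the hyperparameter values taken from the quoted setup.

```python
# A minimal sketch of the quoted optimization setup: SGD, batch size 100,
# learning rate 1e-3, 10 local epochs nested in 50 global epochs. The model,
# placeholder data, loss, and the nested-loop reading of local/global epochs
# are illustrative assumptions; only the hyperparameter values are from the report.
import torch
from torch.utils.data import DataLoader, TensorDataset

BATCH_SIZE, LR = 100, 1e-3
LOCAL_EPOCHS, GLOBAL_EPOCHS = 10, 50

# Placeholder data standing in for a real training set.
train_set = TensorDataset(torch.randn(1000, 2), torch.randint(0, 3, (1000,)))
loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

model = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 3))
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)

for _global in range(GLOBAL_EPOCHS):
    for _local in range(LOCAL_EPOCHS):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
```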