Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning

Authors: Erchi Wang, Yuqing Zhu, Yu-Xiang Wang

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental To validate this hypothesis, we evaluated the margin of SVM classifiers trained on the CIFAR-10 dataset using pre-trained features from Vision Transformer (ViT) (Dosovitskiy et al., 2021) and ResNet-50 (He et al., 2016). Interestingly, as reported in Figure 2, while the margin remains at zero for the whole dataset, if we allow removing a few troublemakers (misclassified points or other points near the decision boundary), it becomes clear that ViT-based features achieve a larger margin, which grows more quickly than that of ResNet-50-based features as more outliers are removed.
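The margin-after-removal measurement quoted above can be illustrated with a minimal sketch (not the paper's code): given a fixed linear separator and toy 2D data, compute each point's signed margin, then report the dataset margin once the k smallest-margin "trouble makers" are dropped.

```python
# Illustrative sketch only: margin of a fixed linear separator w.x + b = 0
# on toy data, before and after removing the k worst-margin points.
import math

def margins(w, b, points):
    """Signed margin of each (x, y) pair, y in {-1, +1}, normalized by ||w||."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [y * (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm
            for x, y in points]

def margin_after_removal(w, b, points, k):
    """Dataset margin once the k smallest-margin points are dropped."""
    m = sorted(margins(w, b, points))
    return m[k]  # smallest margin among the remaining points

# Toy 2D data: the last point is mislabeled and sits on the wrong side.
data = [([2.0, 1.0], +1), ([3.0, 2.0], +1), ([-2.0, -1.0], -1),
        ([-3.0, -2.0], -1), ([0.5, 0.2], -1)]
w, b = [1.0, 1.0], 0.0

print(margin_after_removal(w, b, data, 0))  # negative: not separable as-is
print(margin_after_removal(w, b, data, 1))  # positive after dropping 1 outlier
```

This mirrors the qualitative behavior described in the quote: the full dataset has zero (here, negative) margin, but removing a few boundary points reveals a substantial margin.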
Researcher Affiliation Collaboration ¹Halıcıoğlu Data Science Institute, UC San Diego; ²LinkedIn. Correspondence to: Yu-Xiang Wang <EMAIL>, Erchi Wang <EMAIL>.
Pseudocode Yes Algorithm 1: AJLGD(Φ, c, S, µ) ... Algorithm 2: AIter(M, Θ, S, µ) ... Algorithm 3: DP Adaptive Margin M(S, ε, δ) ... Algorithm 4: APrivTune(M, Θ, Q, S, µ) ... Algorithm 5: ANGD(ℓ(·), S, µ): Noisy Gradient Descent (Bassily et al., 2014) ... Algorithm 6: M(S, ε, δ)
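Algorithm 5's building block, noisy gradient descent (Bassily et al., 2014), can be sketched as follows. The loss, clipping threshold, step size, and noise scale below are illustrative assumptions, not the paper's settings:

```python
# Sketch of noisy gradient descent: per-step L2 gradient clipping plus
# Gaussian noise, as in DP-GD-style methods. Hyperparameters are assumed.
import math
import random

def noisy_gd(grad, theta0, n_steps, lr, clip, sigma, rng):
    """Run gradient descent with clipped gradients and additive Gaussian noise."""
    theta = list(theta0)
    for _ in range(n_steps):
        g = grad(theta)
        norm = math.sqrt(sum(gi * gi for gi in g))
        scale = min(1.0, clip / norm) if norm > 0 else 1.0  # clip to L2 norm <= clip
        theta = [t - lr * (scale * gi + sigma * rng.gauss(0.0, 1.0))
                 for t, gi in zip(theta, g)]
    return theta

# Usage: minimize the quadratic ||theta - c||^2 under small noise.
c = [1.0, -2.0]
grad = lambda th: [2.0 * (t - ci) for t, ci in zip(th, c)]
rng = random.Random(0)
theta = noisy_gd(grad, [0.0, 0.0], n_steps=200, lr=0.05,
                 clip=1.0, sigma=0.01, rng=rng)
```

In a real DP deployment, `sigma` would be calibrated to the clipping threshold, number of steps, and target (ε, δ); here it is fixed for illustration.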
Open Source Code No The paper does not explicitly state that code is available, nor does it provide any links to a code repository.
Open Datasets Yes To validate this hypothesis, we evaluated the margin of SVM classifiers trained on the CIFAR-10 dataset using pre-trained features from Vision Transformer (Vi T) (Dosovitskiy et al., 2021) and Res Net-50 (He et al., 2016).
Dataset Splits No The paper mentions using "Classes 1 and 9 from the CIFAR10 training set" but does not specify how the training, validation, or test splits were created or used for their experiments. No explicit percentages, sample counts, or references to predefined splits for their specific experimental setup are provided.
Hardware Specification No The paper does not provide any specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies No The paper does not provide specific software dependencies or version numbers for libraries or frameworks used in the implementation.
Experiment Setup No The paper describes the general approach of training linear SVM classifiers with different preprocessing methods and an iterative outlier removal procedure. However, it does not provide specific hyperparameters for the SVM training (e.g., regularization parameters, learning rates, number of iterations) or other detailed configuration settings required for reproducibility.
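The iterative outlier-removal procedure the paper describes (train a linear SVM, drop the worst-margin point, retrain) could be sketched as below. Since the paper gives no hyperparameters, every value here (learning rate, regularization, epochs, number of rounds) is an assumption, with hinge-loss subgradient descent standing in for a full SVM solver:

```python
# Illustrative sketch of the described loop; all hyperparameters assumed.
import math

def train_linear_svm(points, lr=0.1, lam=0.01, epochs=200):
    """Fit a linear classifier via hinge-loss subgradient descent."""
    w = [0.0] * len(points[0][0])
    for _ in range(epochs):
        for x, y in points:
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score < 1:  # margin violated: hinge subgradient step
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
            else:              # only the regularizer contributes
                w = [wi - lr * lam * wi for wi in w]
    return w

def remove_worst_and_retrain(points, rounds=2):
    """Repeatedly drop the single worst-margin point and retrain."""
    for _ in range(rounds):
        w = train_linear_svm(points)
        norm = math.sqrt(sum(wi * wi for wi in w)) or 1.0
        m = [y * sum(wi * xi for wi, xi in zip(w, x)) / norm for x, y in points]
        worst = m.index(min(m))
        points = [p for i, p in enumerate(points) if i != worst]
    return train_linear_svm(points), points

# Toy data with one mislabeled outlier near the boundary.
data = [([2.0, 1.0], +1), ([3.0, 2.0], +1), ([-2.0, -1.0], -1),
        ([-3.0, -2.0], -1), ([0.5, 0.2], -1)]
w_final, remaining = remove_worst_and_retrain(data)
```

Each round removes exactly one point, so `remaining` shrinks by one per round regardless of the classifier's quality on this toy data.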