Unlearning Personal Data from a Single Image

Authors: Thomas De Min, Massimiliano Mancini, Stéphane Lathuilière, Subhankar Roy, Elisa Ricci

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate unlearning approaches in 1-SHUI, we focus on datasets annotated with identity information and for a different downstream task (i.e., face attribute recognition, age classification). The goal is to train a model on the downstream task and to perform identity-aware unlearning while preserving the original model accuracy on the test set. We identify three datasets that satisfy our requirements: CelebA-HQ (Karras et al., 2017; Lee et al., 2020), CelebA (Liu et al., 2015), and MUFAC (Choi & Na, 2023). ... Tables 1 to 3 evaluate existing methods and MetaUnlearn in One-Shot Unlearning of Personal Identities.
Researcher Affiliation | Academia | Thomas De Min, University of Trento, Italy; Massimiliano Mancini, University of Trento, Italy; Stéphane Lathuilière, LTCI, Télécom-Paris, Institut Polytechnique de Paris, France and Inria Grenoble, Univ. Grenoble Alpes, France; Subhankar Roy, University of Trento, Italy; Elisa Ricci, University of Trento, Italy and Fondazione Bruno Kessler, Italy
Pseudocode | Yes | Algorithm 1: MetaUnlearn pseudocode.

    def simulate_unlearning(Itr, Dtr):
        If ⊂ Itr                                  # sampling w/o replacement
        Df = {(xj, yj, ij) | ij ∈ If}, j = 1..Nf
        Dr = Dtr \ Df
        S = build_support_set(If, Df)
        return S, Df, Dr

    def MetaUnlearn_training(Itr, Dtr, Dv):
        for epoch in range(num_epochs):
            # iterate all IDs in batches of size NS
            for it in range(N / NS):
                # compute simulated unlearning step
                S, Df, Dr = simulate_unlearning(Itr, Dtr)
                M = hϕ(fθ(S))
                θu = θ − η∇θM                     # unlearn S in one step
                # evaluate meta-loss
                A = MSE(Ltask(Df; θu), Ltask(Dv; θu))
                A += MSE(Ltask(Df; θu), Ltask(Dv; θ))
                # update ϕ using gradients ∇ϕA and lr α
                ϕ = Adam(ϕ, ∇ϕA, α)
        return None

    def MetaUnlearn_unlearning(S):
        M = hϕ(fθ(S))
        θ = θ − η∇θM                              # unlearn S in one step
        return θ
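As a toy illustration of the one-step update θu = θ − η∇θM in the pseudocode, the following NumPy sketch runs end to end. It is an assumption-laden stand-in, not the paper's method: the task model is linear with an MSE loss, and the loss network hϕ is replaced by a ϕ-weighted, negated forget loss, so that one gradient-descent step on M ascends the forget loss.

```python
import numpy as np

def task_loss(theta, X, y):
    # MSE of a linear model; toy stand-in for the paper's Ltask
    return np.mean((X @ theta - y) ** 2)

def meta_loss_grad(theta, phi, Xf, yf):
    # Toy h_phi: M = -sum_j w_j * loss_j with softmax weights w = softmax(phi),
    # so one gradient-descent step on M *increases* the forget loss.
    w = np.exp(phi) / np.sum(np.exp(phi))
    grad = np.zeros_like(theta)
    for xj, yj, wj in zip(Xf, yf, w):
        grad += wj * 2.0 * (xj @ theta - yj) * xj  # grad of (x.theta - y)^2
    return -grad                                   # minus: M is the negated loss

def one_step_unlearn(theta, phi, Xf, yf, eta=0.1):
    # theta_u = theta - eta * grad_theta M  (single unlearning step)
    return theta - eta * meta_loss_grad(theta, phi, Xf, yf)

rng = np.random.default_rng(0)
theta = rng.normal(size=3)      # "trained" model parameters
phi = np.zeros(4)               # uniform meta-loss weights
Xf = rng.normal(size=(4, 3))    # forget-set features (4 samples)
yf = rng.normal(size=4)         # forget-set targets
theta_u = one_step_unlearn(theta, phi, Xf, yf)
```

Because the weighted forget loss is a convex quadratic in θ, a small ascent step is guaranteed to increase it, which is what makes the single-step update a meaningful unlearning direction in this toy setting.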
Open Source Code | Yes | Source code available at github.com/tdemin16/one-shui.
Open Datasets | Yes | To evaluate unlearning approaches in 1-SHUI, we focus on datasets annotated with identity information and for a different downstream task (i.e., face attribute recognition, age classification). ... We identify three datasets that satisfy our requirements: CelebA-HQ (Karras et al., 2017; Lee et al., 2020), CelebA (Liu et al., 2015), and MUFAC (Choi & Na, 2023).
Dataset Splits | Yes | To evaluate methods in 1-SHUI, we split the full dataset D into train Dtr, validation Dv, and test Dte sets with non-overlapping identities, as Figure 2 illustrates. Then, we randomly select NS identities from Dtr that serve as the base to construct the forget dataset Df. The remaining IDs are left for the retain set Dr.
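The identity-disjoint split described above (train/val/test with non-overlapping identities, then NS forget identities sampled from the train IDs) can be sketched as follows. The function name, split fractions, and the `(x, y, identity)` record format are illustrative assumptions, not the authors' code.

```python
import random

def identity_disjoint_split(samples, val_frac=0.1, test_frac=0.1, ns=5, seed=0):
    """Split (x, y, identity) records so no identity spans two splits,
    then sample ns train identities to form the forget set Df; rest is Dr."""
    rng = random.Random(seed)
    ids = sorted({i for _, _, i in samples})
    rng.shuffle(ids)
    n_val = int(len(ids) * val_frac)
    n_test = int(len(ids) * test_frac)
    val_ids = set(ids[:n_val])
    test_ids = set(ids[n_val:n_val + n_test])
    train_ids = set(ids[n_val + n_test:])
    d_tr = [s for s in samples if s[2] in train_ids]
    d_v = [s for s in samples if s[2] in val_ids]
    d_te = [s for s in samples if s[2] in test_ids]
    # forget set: all images of ns randomly chosen train identities
    forget_ids = set(rng.sample(sorted(train_ids), ns))
    d_f = [s for s in d_tr if s[2] in forget_ids]
    d_r = [s for s in d_tr if s[2] not in forget_ids]
    return d_tr, d_v, d_te, d_f, d_r
```

Splitting by identity rather than by image is the key design choice: it guarantees that no forgotten person leaks into validation or test data.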
Hardware Specification | Yes | Furthermore, a single NVIDIA A100 GPU was used for all experiments.
Software Dependencies | No | The paper mentions specific optimizers and learning rate schedulers by name with associated citations (e.g., Adam (Kingma, 2014), AMSgrad (Reddi et al., 2018), cosine annealing schedule), and tools like Dropout (Srivastava et al., 2014) and Layer Norm (Ba, 2016). However, it does not provide specific version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow, Python) used to implement these.
Experiment Setup | Yes | For all experiments, we used a ViT-B/16 (Dosovitskiy et al., 2021) pre-trained on ImageNet (Russakovsky et al., 2015). We fine-tuned the ViT on CelebA and CelebA-HQ for 30 epochs, using SGD with a learning rate of 1×10⁻³ and momentum 0.9. The learning rate is warmed up for the first two training epochs and decayed following a cosine annealing schedule for the rest of the optimization. We regularize the optimization using weight decay with penalties of 1×10⁻³ and 1×10⁻⁴ for CelebA-HQ and CelebA, respectively. Additionally, we augment input images with Random Resized Crop and Random Horizontal Flip to further regularize (He et al., 2016). The same optimization configuration is used for both model pretraining and retraining. ... The meta-loss was trained for 3 epochs, except for one experiment in MUFAC... We used the Adam optimizer (Kingma, 2014) with AMSgrad (Reddi et al., 2018) and no weight decay. The learning rate (α) was chosen from {10⁻⁴, 10⁻³, 10⁻²} and was decayed following a cosine annealing schedule. Instead, the meta-learning rate (η), used when computing the unlearning step, was chosen from {10⁻³, 0.1}. ... We used Dropout (Srivastava et al., 2014) for regularization with a probability of 0.5...
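The learning-rate schedule quoted above (two warmup epochs, then cosine annealing over the remaining epochs) can be sketched as a standalone function. The linear-warmup shape and decay to zero are assumptions, since the excerpt specifies neither the warmup curve nor a minimum learning rate.

```python
import math

def lr_at_epoch(epoch, base_lr=1e-3, warmup_epochs=2, total_epochs=30):
    """Linear warmup for the first epochs, then cosine decay to zero."""
    if epoch < warmup_epochs:
        # linear warmup: reaches base_lr at the end of the warmup phase
        return base_lr * (epoch + 1) / warmup_epochs
    # cosine annealing over the remaining epochs: t goes from 0 to ~1
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1 + math.cos(math.pi * t))
```

In a PyTorch setup this would typically be handed to the optimizer via a `LambdaLR`-style scheduler; the standalone form here just makes the shape of the schedule explicit.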