Controllable Unlearning for Image-to-Image Generative Models via $\epsilon$-Constrained Optimization

Authors: Xiaohua Feng, Yuyuan Li, Chaochao Chen, Li Zhang, Jun Zhou, Xiaolin Zheng

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on two benchmark datasets across three mainstream I2I models demonstrate the effectiveness of our controllable unlearning framework."
Researcher Affiliation | Collaboration | Zhejiang University, Hangzhou Dianzi University, Ant Group
Pseudocode | Yes | Algorithm 1: ε-Constraint Optimization Algorithm
Open Source Code | No | The paper contains no explicit statement about releasing source code and no link to a code repository; statements such as "We release our code..." and GitHub links are absent.
Open Datasets | Yes | "Following (Li et al., 2024a), we conduct experiments on the following two large-scale datasets: i) ImageNet-1K (Deng et al., 2009), from which we randomly select 200 classes... ii) Places-365 (Zhou et al., 2017), from which we randomly select 100 classes..."
Dataset Splits | Yes | "ImageNet-1K (Deng et al., 2009), from which we randomly select 200 classes, designating 100 of these as the forget set and the remaining 100 as the retain set. Each class contains 150 images, with 100 allocated for training and the remaining for validation; and ii) Places-365 (Zhou et al., 2017), from which we randomly select 100 classes, designating 50 of these as the forget set and the remaining 50 as the retain set. Each class contains 5500 images, with 5000 allocated for training and the remaining 500 for validation."
Hardware Specification | Yes | "MAE... Overall, it takes an hour on an NVIDIA A40 (48G) server. VQ-GAN... Overall, it takes two hours on an NVIDIA A40 (48G) server. Diffusion model... Overall, it takes twelve hours on an NVIDIA A40 (48G) server."
Software Dependencies | No | The paper mentions optimizers such as AdamW and Adam and their hyperparameters (e.g., β = (0.90, 0.95)) but does not specify version numbers for key software components such as Python, PyTorch, TensorFlow, or CUDA.
Experiment Setup | Yes | "MAE. We set the learning rate to 10^-4 with no weight decay. Both baselines and our method employ AdamW as the foundational optimizer with β = (0.90, 0.95)... We set the input image resolution to 224×224 and batch size to 32. Simultaneously, we set the coefficient of ψ(θ) in Phase I to α = 5, and the coefficient of ψ in Phase II to β = 5, followed by training for 8 epochs."
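The Pseudocode row says the paper provides Algorithm 1 (ε-Constraint Optimization), but the pseudocode itself is not quoted above. As a general illustration of the ε-constraint idea only, not the paper's algorithm, the sketch below scalarizes a toy two-objective problem, min f(θ) subject to g(θ) ≤ ε, with a quadratic penalty on the constraint violation; all objectives and constants here are hypothetical.

```python
# Hypothetical epsilon-constraint sketch (NOT the paper's Algorithm 1):
# minimize f(theta) subject to g(theta) <= eps, via a violation penalty.

def step(theta, lr=0.01, rho=10.0, eps=1.0):
    """One penalty-based gradient step on toy scalar objectives."""
    grad_f = 2.0 * (theta - 2.0)             # f(t) = (t - 2)^2, "forget" loss
    violation = max(0.0, theta ** 2 - eps)   # g(t) = t^2, "retain" constraint
    grad_g = 2.0 * theta
    # Gradient of f plus gradient of (rho/2) * violation^2.
    return theta - lr * (grad_f + rho * violation * grad_g)

theta = 0.0
for _ in range(1000):
    theta = step(theta)
# theta settles just above t = 1, the boundary of the feasible set g(t) <= 1;
# the unconstrained minimum t = 2 is infeasible, and the finite penalty
# weight rho leaves a small residual constraint violation.
```

Sweeping ε then traces out a family of trade-off solutions, which matches the "controllable" framing in the title: each ε picks a different point on the forget/retain Pareto front.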
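The Dataset Splits row describes a class-level partition (for ImageNet-1K: 200 randomly selected classes, 100 forget / 100 retain, with 100 train and 50 validation images per class). A hypothetical reconstruction of that split logic, with all names and the seed assumed:

```python
import random

# Hypothetical reconstruction of the described ImageNet-1K split; the seed
# and function names are assumptions, not taken from the paper.
random.seed(0)
classes = random.sample(range(1000), 200)   # ImageNet-1K has 1000 classes
forget_classes = set(classes[:100])         # 100 classes to unlearn
retain_classes = set(classes[100:])         # 100 classes to preserve

def split_images(image_ids, n_train=100):
    """Per-class split: n_train images for training, the rest for validation."""
    shuffled = random.sample(image_ids, len(image_ids))
    return shuffled[:n_train], shuffled[n_train:]

# Each class contains 150 images: 100 train, 50 validation.
train_ids, val_ids = split_images(list(range(150)))
```

The Places-365 split in the same row follows the identical pattern with 100 classes (50 forget / 50 retain) and 5000/500 images per class.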
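The Experiment Setup row quotes the MAE optimizer configuration: AdamW with learning rate 10^-4, no weight decay, and β = (0.90, 0.95). To make those hyperparameters concrete, here is the standard AdamW update rule as a pure-Python scalar sketch; this is textbook AdamW with the quoted values plugged in, not code from the paper.

```python
import math

# Hyperparameters quoted in the paper's MAE setup: lr = 1e-4, no weight
# decay, betas = (0.90, 0.95). EPS is the usual numerical-stability default.
LR, BETA1, BETA2, WEIGHT_DECAY, EPS = 1e-4, 0.90, 0.95, 0.0, 1e-8

def adamw_step(theta, grad, m, v, t):
    """One standard AdamW update for a single scalar parameter (step t >= 1)."""
    m = BETA1 * m + (1 - BETA1) * grad           # first-moment estimate
    v = BETA2 * v + (1 - BETA2) * grad * grad    # second-moment estimate
    m_hat = m / (1 - BETA1 ** t)                 # bias correction
    v_hat = v / (1 - BETA2 ** t)
    # Decoupled weight decay acts on the parameter directly, not the gradient;
    # with WEIGHT_DECAY = 0 it is a no-op, matching "no weight decay".
    theta -= LR * (m_hat / (math.sqrt(v_hat) + EPS) + WEIGHT_DECAY * theta)
    return theta, m, v

theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 101):                          # 100 steps, constant gradient
    theta, m, v = adamw_step(theta, 1.0, m, v, t)
# With a constant unit gradient the bias-corrected ratio m_hat/sqrt(v_hat)
# equals 1 at every step, so theta decreases by roughly LR per step.
```

β₂ = 0.95 is notably lower than the common 0.999 default, which shortens the second-moment memory; the row flags that library versions (PyTorch, CUDA, etc.) behind this setup are not reported.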