Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator

Authors: Xin Zhang, Jiawei Du, Ping Liu, Joey Tianyi Zhou

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments across CIFAR, Tiny-ImageNet and ImageNet-1k datasets demonstrate the state-of-the-art performance of INFER. For instance, in the ipc = 50 setting on ImageNet-1k with the same compression level, it outperforms SRe2L by 34.5% using ResNet-18. All experiments were conducted using two Nvidia 3090 GPUs and one Tesla A-100 GPU.
Researcher Affiliation | Academia | Xin Zhang (1,2), Jiawei Du (1,2), Ping Liu (3), Joey Tianyi Zhou (1,2). 1: Centre for Frontier AI Research, Agency for Science, Technology and Research, Singapore; 2: Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore; 3: University of Nevada, Reno.
Pseudocode | Yes | Algorithm 1: Distillation on synthetic dataset via Inter-class Feature Compensator (INFER). Require: target dataset T; number of subsets K; number of classes C; M networks with different architectures {f_1, f_2, ..., f_M}.
Open Source Code | Yes | Codes are available at https://github.com/zhangxin-xd/UFC.
Open Datasets | Yes | We conduct the comparison with several representative distillation methods... This evaluation is performed on four popular classification benchmarks, including CIFAR-10/100 (Krizhevsky et al., 2009), Tiny-ImageNet (Le & Yang, 2015), and ImageNet-1k (Deng et al., 2009).
Dataset Splits | No | The performance is measured as the Top-1 accuracy of ResNet-18 (ConvNet128 for MTT) on the respective validation sets, trained from scratch using synthetic datasets. For reproducibility, the hyperparameter settings for the experimental datasets CIFAR-10/100, Tiny-ImageNet, and ImageNet-1k are provided in Appendix A.3. These settings generally follow SRe2L (Yin et al., 2024), with the sole modification being a proportional reduction in the validation epoch number for the dynamic version to ensure fair comparison.
Hardware Specification | Yes | All experiments were conducted using two Nvidia 3090 GPUs and one Tesla A-100 GPU.
Software Dependencies | No | Our INFER uses M = 4, meaning it employs four different architectures for optimizing UFCs: ResNet-18 (He et al., 2016), MobileNetV2 (Sandler et al., 2018), EfficientNet-B0 (Tan & Le, 2019), and ShuffleNetV2 (Ma et al., 2018). When distilling ImageNet-1k, only the first three architectures (M = 3) are involved.
Experiment Setup | Yes | For reproducibility, the hyperparameter settings for the experimental datasets CIFAR-10/100, Tiny-ImageNet, and ImageNet-1k are provided in Appendix A.3. These settings generally follow SRe2L (Yin et al., 2024), with the sole modification being a proportional reduction in the validation epoch number for the dynamic version to ensure fair comparison.