Adaptive Dataset Quantization
Authors: Muquan Li, Dongyang Zhang, Qiang Dong, Xiurui Xie, Ke Qin
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on CIFAR-10, CIFAR-100 (Krizhevsky, Hinton et al. 2009), ImageNet-1K (Russakovsky et al. 2015) and Tiny-ImageNet (Le and Yang 2015) substantiate a marked enhancement in performance over the baseline DQ by an average of 3%, establishing new state-of-the-art results. |
| Researcher Affiliation | Academia | Institute of Intelligent Computing, University of Electronic Science and Technology of China, China |
| Pseudocode | Yes | Algorithm 1: Adaptive Dataset Quantization |
| Open Source Code | Yes | Code https://github.com/SLGSP/ADQ |
| Open Datasets | Yes | Datasets Following the evaluation protocol of previous DQ (Zhou et al. 2023), we utilize image classification as a proxy task for evaluation and mainly assess our method on CIFAR-10 (Krizhevsky, Hinton et al. 2009) and ImageNet-1K (Russakovsky et al. 2015). |
| Dataset Splits | Yes | CIFAR-10 contains 50,000 samples for training and 10,000 samples for validation, with a resolution of 32 × 32. ImageNet-1K comprises 1,281,126 samples from 1,000 categories for training, with each category containing 50 images for validation. |
| Hardware Specification | No | The paper mentions 'GPU hours' in Table 2, but does not specify any particular GPU models or other hardware components used for the experiments. |
| Software Dependencies | No | The paper mentions models like ResNet-18 and Vision Transformer, but does not provide specific version numbers for any software libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | For comparison, we conduct training for 200 epochs on CIFAR-10 with batch size 128, and we employ a cosine-annealed learning rate that initializes at 0.1. |
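The quoted training setup (200 epochs, cosine-annealed learning rate starting at 0.1) can be sketched as below. This is a minimal illustration, not the authors' code: the function name is ours, and we assume the schedule decays to zero over the full 200 epochs, which the paper does not state.

```python
import math

def cosine_annealed_lr(epoch, total_epochs=200, base_lr=0.1):
    """Cosine-annealed learning rate per the quoted setup (assumed to
    decay from base_lr at epoch 0 to 0 at epoch total_epochs)."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * epoch / total_epochs))

# Schedule endpoints: starts at 0.1, halves at epoch 100, reaches 0 at 200.
print(cosine_annealed_lr(0), cosine_annealed_lr(100), cosine_annealed_lr(200))
```

In a PyTorch reproduction this would typically correspond to `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max=200` stepped once per epoch.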