Slice-and-Pack: Tailoring Deep Models for Customized Requirements

Authors: Ruice Rao, Dingwei Li, Ming Li

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical evaluations show that S&P can generate highly accurate packed models and expand the market's capacity many times over. We conducted extensive experiments on various datasets for image classification and sentiment analysis. Our code was implemented in PyTorch and executed on an NVIDIA A100 40GB PCIe GPU with an AMD EPYC 7H12 64-Core Processor. Implementation details are provided in Appendix C.
Researcher Affiliation | Academia | 1 National Key Laboratory for Novel Software Technology, Nanjing University, China; 2 School of Artificial Intelligence, Nanjing University, China
Pseudocode | Yes | The algorithm is shown in Appendix A.
Open Source Code | No | The paper does not explicitly state that code for the described methodology is released, nor does it include a link to a code repository. It only mentions implementation details and the tools used for execution.
Open Datasets | Yes | We conducted a series of experiments on four different datasets: CIFAR10, CIFAR100 (Krizhevsky, Hinton et al. 2009), TREC (Hovy et al. 2001), and SST-5 (Socher et al. 2013).
Dataset Splits | Yes | CIFAR10 and CIFAR100 consist of 50,000 images for training and 10,000 for testing. The TREC Question Classification dataset contains 5,500 sentences in the training set and another 500 in the test set, with 6 classes. The SST-5 dataset consists of 8,544 sentences in the training set and another 2,210 in the test set, with 5 classes.
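The quoted split sizes can be collected into a single summary for sanity-checking a reproduction. This is a minimal sketch; the `DATASET_SPLITS` table and `total_examples` helper are illustrative names, not part of the paper's code.

```python
# Split sizes as quoted from the paper (illustrative summary, not the
# authors' code). Values: training examples, test examples, class count.
DATASET_SPLITS = {
    "CIFAR10":  {"train": 50_000, "test": 10_000, "classes": 10},
    "CIFAR100": {"train": 50_000, "test": 10_000, "classes": 100},
    "TREC":     {"train": 5_500,  "test": 500,    "classes": 6},
    "SST-5":    {"train": 8_544,  "test": 2_210,  "classes": 5},
}

def total_examples(name: str) -> int:
    """Total number of examples (train + test) for a named dataset."""
    split = DATASET_SPLITS[name]
    return split["train"] + split["test"]
```

A reproduction can compare these totals against the sizes of the datasets it actually downloads before running any experiments.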
Hardware Specification | Yes | Our code was implemented in PyTorch and executed on an NVIDIA A100 40GB PCIe GPU with an AMD EPYC 7H12 64-Core Processor.
Software Dependencies | No | Our code was implemented in PyTorch and executed on an NVIDIA A100 40GB PCIe GPU with an AMD EPYC 7H12 64-Core Processor. While PyTorch is mentioned, no specific version number is provided for the software dependency.
Experiment Setup | Yes | For each original model, we randomly sample k samples per class from the corresponding dataset. More details and experiments with different datasets and architectures are in Appendix D. The experimental tables (e.g., Table 2) show results for specific k values (k = 5, k = 10, k = 20).
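The "k samples per class" protocol quoted above can be sketched as a small sampling routine. This is an illustrative sketch, not the authors' implementation (which is described in Appendix D); the function name and seed handling are assumptions.

```python
import random
from collections import defaultdict

def sample_k_per_class(labels, k, seed=0):
    """Return indices of k randomly chosen examples from each class.

    `labels` is a sequence of class labels, one per example. Sketch of
    the 'randomly sample k samples per class' protocol; all names here
    are illustrative, not from the paper's code.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    chosen = []
    for y in sorted(by_class):
        chosen.extend(rng.sample(by_class[y], k))  # k distinct indices
    return chosen
```

For CIFAR10 with k = 5 this would select 50 indices in total (5 per class), matching the smallest setting reported in Table 2.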