Effective Interplay between Sparsity and Quantization: From Theory to Practice
Authors: Simla Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Dongho Ha, Babak Falsafi, Martin Jaggi, Ming Liu, Yunho Oh, Suvinay Subramanian, Amir Yazdanbakhsh
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we provide the first mathematical proof that sparsity and quantization are non-orthogonal. We corroborate these results with experiments spanning a range of large language models, including the OPT and LLaMA model families (with 125M to 8B parameters), and vision models like ViT and ResNet. |
| Researcher Affiliation | Collaboration | Simla Burcu Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Babak Falsafi, Martin Jaggi (EcoCloud, EPFL); Dongho Ha (MangoBoost Inc.); Ming Liu, Suvinay Subramanian (Google); Yunho Oh (Korea University); Amir Yazdanbakhsh (Google DeepMind) |
| Pseudocode | No | The paper describes methods through mathematical definitions (Definition 3.1, Definition 3.2) and formulas (e.g., Qm(xi, scale) and xi := 0 if |xi| < ξ, xi otherwise), but it does not contain a dedicated 'Pseudocode' or 'Algorithm' section or block. |
| Open Source Code | Yes | Code and data are available at: https://sq-interplay.github.io/ |
| Open Datasets | Yes | We study the most widely adopted Transformer-based models, including OPT (Zhang et al., 2022b) and LLaMA (Touvron et al., 2023) model families. [...] we fine-tune pre-trained models and evaluate perplexity on the WikiText2 (Merity et al., 2017) dataset. [...] and vision models like ViT and ResNet. [...] on ImageNet-1k (Deng et al., 2009). |
| Dataset Splits | No | We fine-tune pre-trained models and evaluate perplexity on the WikiText2 (Merity et al., 2017) dataset. The pre-trained LLMs used in our experiments are base (general-purpose) models, not instruct-tuned variants. In addition, we assess non-orthogonality across different metrics of ViT (Dosovitskiy et al., 2021) and ResNet (He et al., 2016) on ImageNet-1k (Deng et al., 2009). (This text mentions the datasets but lacks specific split information like percentages or sample counts. It only implicitly refers to a "test subset" without details.) |
| Hardware Specification | Yes | We conduct our experiments on four NVIDIA A100 GPUs with 80GB memory, and for small models, we use four NVIDIA V100 GPUs with 32GB memory. |
| Software Dependencies | No | The paper mentions optimizers such as Adam and AdamW and uses various models (OPT, LLaMA, ViT, ResNet), but it does not specify version numbers for any software, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | We perform full parameter fine-tuning while applying magnitude-based sparsity methods. We find the optimal hyperparameters through grid search for each model and sparsity type and apply the same hyperparameters across all number formats, including FP32. [...] Table 4: Details of the sparse fine-tuning experiments includes specific values for Batch size, Weight decay, Optimizer, FT num. iterations, and Learning rate for different models. |
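The table above quotes the paper's magnitude-based sparsity rule (xi := 0 if |xi| < ξ, xi otherwise) and a scaled quantizer Qm(xi, scale). The following is a minimal NumPy sketch of those two operations and of why their composition order can matter, which is the intuition behind the non-orthogonality claim. The function names, the toy symmetric uniform quantizer, and the threshold value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def magnitude_sparsify(x, xi):
    # Magnitude-based sparsity: zero out entries whose magnitude
    # falls below the threshold xi (x_i := 0 if |x_i| < xi).
    return np.where(np.abs(x) < xi, 0.0, x)

def quantize(x, num_bits=4):
    # Toy symmetric uniform quantizer (an assumption for illustration):
    # scale by the max magnitude, round to the nearest level.
    scale = np.max(np.abs(x)) / (2 ** (num_bits - 1) - 1)
    if scale == 0:
        return x
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)

# If sparsity and quantization were orthogonal, applying them in either
# order would compose their errors independently; in general it does not.
sq = quantize(magnitude_sparsify(w, xi=0.5))   # sparsify, then quantize
qs = magnitude_sparsify(quantize(w), xi=0.5)   # quantize, then sparsify

err_sq = np.mean((w - sq) ** 2)
err_qs = np.mean((w - qs) ** 2)
print(err_sq, err_qs)
```

Quantization can push a value across the sparsity threshold (or vice versa), so the two orders generally produce different masks and different end-to-end error, illustrating the interplay the paper analyzes formally.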