Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precision
Authors: Xijie Huang, Zhiqiang Shen, Pingcheng Dong, Kwang-Ting Cheng
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively verify and demonstrate our scheme can alleviate the variation and improve the performance of transformers across various models and tasks. For the 2-bit Swin-T and binary BERT-base, our solutions achieve a 3.35% and 1.4% accuracy improvement over previous state-of-the-art methods on the ImageNet-1K dataset and GLUE benchmark. |
| Researcher Affiliation | Academia | Xijie Huang¹, Zhiqiang Shen², Pingcheng Dong¹, Tim Kwang-Ting Cheng¹; ¹Hong Kong University of Science and Technology, ²Mohamed bin Zayed University of Artificial Intelligence |
| Pseudocode | No | The paper describes methods using equations and prose but does not include any explicitly labeled pseudocode, algorithm blocks, or structured code-like procedures. |
| Open Source Code | Yes | Codes and models are available at https://github.com/HuangOwen/Quantization-Variation. |
| Open Datasets | Yes | The experiments are carried out on the ImageNet-1K dataset (Deng et al., 2009) and GLUE benchmark (Wang et al., 2018). |
| Dataset Splits | Yes | The experiments are carried out on the ImageNet-1K dataset (Deng et al., 2009) and GLUE benchmark (Wang et al., 2018). |
| Hardware Specification | Yes | The total training time for our DeiT-T with 4 NVIDIA A100 GPUs is 57.3 hours, significantly lower than baseline methods shown in Table 5. ... Table 15: GPU memory consumption and training time per epoch of Swin-T on a single NVIDIA 80G A100 GPU. |
| Software Dependencies | No | The paper does not provide specific version numbers for any key software components (e.g., Python, PyTorch, CUDA libraries) used in the experiments. It mentions 'AdamW' as an optimizer but without a version. |
| Experiment Setup | Yes | Table 10: Detailed hyper-parameters and training scheme for different tasks in the GLUE benchmark. ... Table 11: Detailed hyper-parameters and training scheme for different ViT architectures. |