ARB-LLM: Alternating Refined Binarizations for Large Language Models

Authors: Zhiteng Li, Xianglong Yan, Tianao Zhang, Haotong Qin, Dong Xie, Jiang Tian, Zhongchao Shi, Linghe Kong, Yulun Zhang, Xiaokang Yang

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our ARB-LLM_RC (ARB-RC + CGB) significantly outperforms SOTA binary PTQ methods while requiring less memory. Furthermore, ARB-LLM_RC, for the first time, surpasses same-size FP16 models on zero-shot QA datasets.
Researcher Affiliation | Collaboration | 1 Shanghai Jiao Tong University, 2 ETH Zürich, 3 Lenovo Research
Pseudocode | Yes | Algorithm 1: First-Order Alternating Refined Binarization
Open Source Code | Yes | Code: https://github.com/ZHITENGLI/ARB-LLM
Open Datasets | Yes | Following Frantar et al. (2023) and Huang et al. (2024), we use 128 samples from the C4 (Raffel et al., 2020) dataset as calibration data. ... We measure the perplexity of LLMs' outputs on WikiText2 (Merity et al., 2017), PTB (Marcus et al., 1994), as well as a part of the C4 (Raffel et al., 2020) data.
Dataset Splits | Yes | Following Frantar et al. (2023) and Huang et al. (2024), we use 128 samples from the C4 (Raffel et al., 2020) dataset as calibration data. ... We measure the perplexity of LLMs' outputs on WikiText2 (Merity et al., 2017), PTB (Marcus et al., 1994), as well as a part of the C4 (Raffel et al., 2020) data. Moreover, we also evaluate the accuracy on 7 zero-shot QA datasets: ARC-c (Clark et al., 2018), ARC-e (Clark et al., 2018), BoolQ (Clark et al., 2019), HellaSwag (Zellers et al., 2019), OBQA (Mihaylov et al., 2018), PIQA (Bisk et al., 2020), and Winogrande (Sakaguchi et al., 2020).
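Since the evaluation reports perplexity on WikiText2, PTB, and part of C4, a minimal sketch of the standard perplexity computation may help: perplexity is the exponential of the mean per-token negative log-likelihood. This is a generic illustration, not the paper's evaluation harness, and the function name is invented for the example.

```python
import math

def perplexity(token_nlls):
    """Perplexity from per-token negative log-likelihoods (in nats):
    PPL = exp(mean NLL). Lower is better; an FP16 baseline and a
    binarized model are compared by this number on the same test set."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A model that is uniform over a 4-token vocabulary assigns each token
# probability 1/4, i.e. NLL = ln(4) per token, so its perplexity is ~4.
ppl = perplexity([math.log(4)] * 10)
```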
Hardware Specification | Yes | All the experiments are conducted with PyTorch (Paszke et al., 2019b) and Huggingface (Paszke et al., 2019a) on a single NVIDIA A800-80GB GPU.
Software Dependencies | No | All the experiments are conducted with PyTorch (Paszke et al., 2019b) and Huggingface (Paszke et al., 2019a) on a single NVIDIA A800-80GB GPU. No specific version numbers for PyTorch or Huggingface are provided.
Experiment Setup | Yes | We implement 15 iterations for ARB-LLM_X and ARB-LLM_RC to ensure the convergence of binarization parameters. Following Frantar et al. (2023) and Huang et al. (2024), we use 128 samples from the C4 (Raffel et al., 2020) dataset as calibration data.
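The checklist cites Algorithm 1 (First-Order Alternating Refined Binarization) and 15 refinement iterations. As a rough illustration of what alternating refinement of binarization parameters looks like, here is a minimal sketch assuming the standard scaled-sign model W ≈ α·B + μ, alternately re-fitting the shift μ, the sign codes B, and the scale α. This is an assumption-laden simplification: the paper's actual algorithm also incorporates calibration data, row-column scaling (ARB-RC), and column-group bitmaps (CGB), none of which appear here.

```python
import numpy as np

def alternating_binarize(W, iters=15):
    # Sketch only, not the paper's exact Algorithm 1: approximate
    # W ~ alpha * B + mu with B in {-1, +1}, alternating between
    # updating (B, alpha) given mu and updating mu given (B, alpha).
    mu = W.mean(axis=1, keepdims=True)          # initial row-wise shift
    for _ in range(iters):
        R = W - mu                               # centered residual
        B = np.where(R >= 0, 1.0, -1.0)          # binary sign codes
        # mean(|R|) is the least-squares-optimal scale for sign codes
        alpha = np.abs(R).mean(axis=1, keepdims=True)
        # refit the shift against the current binary approximation
        mu = (W - alpha * B).mean(axis=1, keepdims=True)
    return alpha, B, mu

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))
alpha, B, mu = alternating_binarize(W, iters=15)
rel_err = np.linalg.norm(W - (alpha * B + mu)) / np.linalg.norm(W)
```

Because each half-step is a least-squares-optimal update given the other parameters, the reconstruction error is non-increasing across iterations, which is why a fixed budget such as 15 iterations suffices for convergence in practice.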