Toward Linearly Regularizing the Geometric Bottleneck of Linear Generalized Attention
Authors: Jiaxu Liu, Xinping Yi, Xiangyu Yin, Yuhang Song, Gaojie Jin, Xiaowei Huang
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on sequence and graph benchmarks demonstrate that BRL-Attention matches or surpasses the predictive performance of standard Transformers with full attention, while substantially reducing memory usage and computation time to levels comparable with linear sparse attention. In extensive experiments (Sec. 3), including long-sequence modeling and large-graph node classification, we show that BRL-Attention not only matches or surpasses full-attention Transformers but also substantially reduces memory usage and computational cost. |
| Researcher Affiliation | Academia | 1University of Liverpool, 2Southeast University, 3National Tsing Hua University, 4University of Exeter. All listed affiliations are academic institutions. |
| Pseudocode | Yes | Algorithm 1: BRL-Former and its Constituent Functions. Given x (data, R^{n×d}), x_ct (initialized as nn.Embedding, R^{m×d_ct}) and x_hist (initialized as 0, R^{n_hist×d}). |
| Open Source Code | No | The local-attention part (the F_gen function in Alg. 1) is implemented based on Phil Wang's implementation (Local-Attention GitHub repo: https://github.com/lucidrains/local-attention). This reference points to a third-party implementation that the authors used, not their own code for the proposed BRL-Attention. |
| Open Datasets | Yes | We evaluate BRL-Former on the Long Range Arena (LRA) (Tay et al., 2020b)... We study autoregressive language modeling on WikiText-103 (Merity et al., 2016)... We evaluate BRL-Former on node classification tasks using the DBLP, ACM, IMDB, and Freebase datasets from the HGB benchmark. DBLP, ACM, and IMDB follow HGB (Lv et al., 2021) guidelines, while Freebase uses the split from (Mao et al., 2023). We evaluate our model on two datasets without graph structure: 20NewsGroups (Pedregosa et al., 2011) and Mini-ImageNet (Vinyals et al., 2016). |
| Dataset Splits | Yes | We explore three settings of m = w ∈ {64, 128, 256} alongside w = 512 for all local-attention-based models (e.g., Local-Attn, Longformer). The hyperparameter settings for each subtask for the original implementation and our implementation are delegated to Tab. 11 and Tab. 12, respectively. We evaluate BRL-Former on the Long Range Arena (LRA) (Tay et al., 2020b)... DBLP, ACM, and IMDB follow HGB (Lv et al., 2021) guidelines, while Freebase uses the split from (Mao et al., 2023). |
| Hardware Specification | Yes | As no backward pass is required, we can scale the sequence length n up to 2^18 with a single 24GB RTX-4090 GPU. |
| Software Dependencies | No | The paper mentions using Phil Wang's implementation of local-attention and provides a GitHub link, but it does not specify versions of any programming languages, libraries, or frameworks (e.g., Python, PyTorch, CUDA) used for the authors' own implementation. |
| Experiment Setup | Yes | Specifically, we evaluate Full-Attention, Performer, Linformer (k ∈ {64, 128, 256}), Local-Attention (w ∈ {50, 100}) and our BRL-Attention (w ∈ {50, 100}, m ∈ {10, 50, 100}). The experiment ran with batch size 16, a 1-layer encoder, an 8-layer decoder, 8 heads, and 512 hidden dims. We benchmark with sequence length n ∈ [200, 2000] with step size 100. We empirically find that setting β = 0.5 performs well generally. We let γ = exp(γ_Q γ_K1) exp(γ_Q γ_K2), where the default values for the learnable γ_Q/γ_K1/γ_K2 are torch.normal(H, mean=0, std=0.1). Finally, we select the regularizer weight λ in Eqs. (4) and (7) from 0.1–1.0 × torch.ones(H), where we empirically find 0.5 generally works well. The hyperparameter settings for each subtask for the original implementation and our implementation are delegated to Tab. 11 and Tab. 12, respectively. |
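The initialization described in the Experiment Setup row can be sketched in PyTorch as below. This is a minimal illustration under assumptions: the names `gamma_Q`, `gamma_K1`, `gamma_K2`, and `lam` are hypothetical (the paper only says the learnable γ parameters are drawn per-head via torch.normal with mean 0 and std 0.1, and that the regularizer weight λ of 0.5 generally works well); it is not the authors' released code.

```python
import torch

H = 8  # number of attention heads, matching the reported 8-head setup

# Hypothetical parameter names: one learnable scalar per head for each of
# the gamma factors, initialized as N(0, 0.1) as stated in the paper.
gamma_Q = torch.nn.Parameter(torch.normal(mean=torch.zeros(H), std=0.1))
gamma_K1 = torch.nn.Parameter(torch.normal(mean=torch.zeros(H), std=0.1))
gamma_K2 = torch.nn.Parameter(torch.normal(mean=torch.zeros(H), std=0.1))

# Per-head regularizer weight lambda, selected from the 0.1-1.0 range;
# the paper reports 0.5 working well in general.
lam = 0.5 * torch.ones(H)

print(gamma_Q.shape, lam.shape)
```

Since λ is a fixed per-head weight here rather than an `nn.Parameter`, it stays out of the optimizer; the γ scalars, by contrast, receive gradients during training.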