On the Power of Convolution-Augmented Transformer
Authors: Mingchen Li, Xuechen Zhang, Yixiao Huang, Samet Oymak
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations on real datasets corroborate our findings and demonstrate that CAT and its variations indeed enhance the language modeling performance. We theoretically and empirically show that, within the CAT layer, attention and convolution exhibit strong synergy and complementarity to solve these mechanistic tasks while enjoying length generalization benefits. |
| Researcher Affiliation | Academia | 1 University of Michigan, 2 UC Berkeley |
| Pseudocode | No | The paper describes the model architecture and methods using textual descriptions and architectural diagrams (e.g., Figure 2, Figure 3), but it does not contain explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide links to a code repository. |
| Open Datasets | Yes | We pretrain the modified 370M-parameter model on the Slim Pajama (Soboleva et al. 2023) dataset, involving 15B tokens. We then assess the model on a variety of downstream zero-shot tasks, including Wikitext, Lambada, Piqa, Hella, Winogrande, Arc-E, and Arc-C, a methodology commonly used in the field to evaluate generalization capabilities across diverse tasks (Gu and Dao 2023; Arora et al. 2023, 2024). |
| Dataset Splits | No | The paper mentions training models on the Slim Pajama dataset and evaluating on various downstream zero-shot tasks, but it does not explicitly specify training, validation, and test splits (e.g., percentages or counts) for these datasets or for the synthetic data generation. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, or cloud instances) used to run the experiments. |
| Software Dependencies | No | The paper mentions using the Pythia framework but does not provide version numbers for software dependencies such as libraries, programming languages, or other tools used in the experiments. |
| Experiment Setup | Yes | We utilize convolution kernels with a width of W = 3 and explore model embedding sizes of d = 32, 64, and 128 across MQAR and MQNAR problems to assess the impact of model dimension on performance. We adhere strictly to the parameters set by (Arora et al. 2023). More detailed information on the training setup can be found in Section A including the data generation and hyperparameters. |