Training Dynamics of In-Context Learning in Linear Attention
Authors: Yedi Zhang, Aaditya K Singh, Peter E. Latham, Andrew M Saxe
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We empirically demonstrate that the single and multiple loss drops also occur in softmax ATTNM and ATTNS, respectively." (Page 2) and "We empirically find that the different training dynamics of linear ATTNM and linear ATTNS also occur in their softmax counterparts. Figure 5 follows the same setup as Figures 1 and 3 for linear attention, with the only difference being adding the softmax activation function for the attention calculation." (Page 9). Figures 1, 3, 5, 6, 7, 8, 10, 11, 12, 13, and 14 also show "Simulations". |
| Researcher Affiliation | Academia | 1Gatsby Computational Neuroscience Unit, University College London 2Sainsbury Wellcome Centre, University College London. Correspondence to: Yedi Zhang <EMAIL>. |
| Pseudocode | No | The paper describes methods using mathematical equations and prose but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Code reproducing our main results is available at GitHub: https://github.com/yedizhang/linattn-icl" |
| Open Datasets | No | The data are generated synthetically rather than drawn from a public dataset: "We consider the in-context linear regression task, where the yn in context and the target output yq are generated as a linear map of the corresponding xn and xq (Garg et al., 2022). For each sequence Xµ, we independently sample a task vector wµ from a D-dimensional standard normal distribution, wµ ∼ N(0, I), and generate yµ,n = wµ⊤xµ,n, yµ,q = wµ⊤xµ,q, n = 1, …, N, µ = 1, …, P." |
| Dataset Splits | No | "We are given a training dataset {Xµ, yµ,q}, µ = 1, …, P, consisting of P samples." (Page 2). The Figure 14 caption also mentions "in-context learning test loss, and in-weight learning test loss". However, specific percentages or counts for training/validation/test splits are not provided. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) in the main text. |
| Experiment Setup | No | The paper mentions qualitative aspects like 'small initialization' and 'small learning rate' (Page 4, 9), and parameters for figures (e.g., 'D = 4, N = 31, H = 8' in Figure 1). However, it lacks concrete numerical values for hyperparameters such as the specific learning rate, batch size, number of epochs, or detailed optimizer settings in the main text. |
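For context on the Open Datasets row: the quoted passage fully specifies the synthetic in-context linear regression data, so it can be regenerated directly. Below is a minimal NumPy sketch of that generation, assuming the dimensions quoted from Figure 1 (D = 4, N = 31); the function and variable names are illustrative, not from the paper or its repository.

```python
import numpy as np

def generate_icl_regression_data(P=512, N=31, D=4, seed=0):
    """Synthetic in-context linear regression data, per the quoted setup
    (Garg et al., 2022): each sequence mu gets its own task vector
    w_mu ~ N(0, I), and every label is the linear map y = w_mu^T x."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((P, N, D))    # context inputs x_{mu,n}
    x_q = rng.standard_normal((P, D))     # query inputs x_{mu,q}
    w = rng.standard_normal((P, D))       # per-sequence task vectors w_mu
    y = np.einsum('pd,pnd->pn', w, X)     # context labels y_{mu,n} = w_mu^T x_{mu,n}
    y_q = np.einsum('pd,pd->p', w, x_q)   # query targets y_{mu,q} = w_mu^T x_{mu,q}
    return X, y, x_q, y_q, w
```

Since each sequence's labels are exact linear functions of its inputs, the "dataset" is reproducible from the random seed alone, which is consistent with the table's "No" verdict on open datasets.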