Learning Multi-Level Features with Matryoshka Sparse Autoencoders

Authors: Bart Bussmann, Noa Nabeshima, Adam Karvonen, Neel Nanda

ICML 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "Through extensive experiments on both synthetic and real-world datasets, we demonstrate that Matryoshka SAEs mitigate feature absorption across a wide range of model sizes and sparsity levels. We find that Matryoshka SAEs have more disentangled latent representations (as measured by maximum cosine similarity of decoder vectors) and also improve performance on probing and targeted concept erasure tasks compared to standard SAE baselines."
Researcher Affiliation | Academia | The paper does not provide clear institutional affiliations (university names, company names, or email domains) for the authors. Only a personal email address is listed for the first author.
Pseudocode | Yes | "Listing 1. Example code implementation of Matryoshka SAE"
Open Source Code | Yes | "For the code implementation used in our experiments see https://github.com/saprmarks/dictionary_learning/blob/main/dictionary_learning/trainers/matryoshka_batch_top_k.py."
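The linked repository is not reproduced here, but the core idea the paper describes (one dictionary split into nested prefixes, each of which must reconstruct the input on its own) can be sketched as follows. This is a hypothetical minimal sketch in PyTorch: the class name, group sizes, and TopK-style sparsification are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MatryoshkaSAE(nn.Module):
    """Sketch of a Matryoshka SAE: the latent dictionary is partitioned into
    nested prefixes, and each prefix gets its own reconstruction loss, so early
    latents are pushed to learn coarse, general features."""

    def __init__(self, d_model: int, group_sizes: list[int], k: int):
        super().__init__()
        d_dict = sum(group_sizes)
        # Cumulative prefix boundaries, e.g. [30, 130, 430, ...]
        self.prefixes = torch.cumsum(torch.tensor(group_sizes), 0).tolist()
        self.k = k  # number of active latents kept per token
        self.enc = nn.Linear(d_model, d_dict)
        self.dec = nn.Linear(d_dict, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pre = self.enc(x)
        # TopK-style sparsity: keep only the k largest activations per token
        vals, idx = pre.topk(self.k, dim=-1)
        z = torch.zeros_like(pre).scatter_(-1, idx, vals.relu())
        # Sum of reconstruction losses over the nested dictionary prefixes
        loss = x.new_zeros(())
        for m in self.prefixes:
            x_hat = z[..., :m] @ self.dec.weight[:, :m].T + self.dec.bias
            loss = loss + (x_hat - x).pow(2).mean()
        return loss
```

Summing the per-prefix losses is what the report's "implicit regularization from the multiple reconstruction objectives" refers to: no explicit sparsity or weight penalty is added.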
Open Datasets | Yes | "To systematically study how features evolve as SAE dictionary size increases, we train a family of small reference SAEs with varying dictionary sizes (30, 100, 300, 1k, 3k, and 10k latents) on three model locations: attention block outputs, MLP outputs, and the residual stream. ... We also use Tiny Stories (Eldan & Li, 2023) ... We use the Adam optimizer with a learning rate of 3 × 10⁻⁴ and batch size of 2048. No additional regularization terms are used beyond the implicit regularization from the multiple reconstruction objectives. ... The SAEs were trained on the residual stream activations from layer 12 of Gemma 2-2B using 500M tokens sampled from The Pile (Gao et al., 2020)."
Dataset Splits | No | The paper mentions training on 500M tokens from The Pile and 200M tokens for ablation studies, but does not provide explicit training/test/validation splits (percentages, counts, or references to predefined splits) for its experiments.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions that models are implemented in PyTorch and that the Adam optimizer is used, but does not provide version numbers for these or any other key software components.
Experiment Setup | Yes | "We train five Matryoshka SAEs with an average sparsity of respectively 20, 40, 80, 160, and 320 active latents per token. The SAEs were trained on the residual stream activations from layer 12 of Gemma 2-2B using 500M tokens sampled from The Pile (Gao et al., 2020). We use the Adam optimizer with a learning rate of 3 × 10⁻⁴ and batch size of 2048. No additional regularization terms are used beyond the implicit regularization from the multiple reconstruction objectives."
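The reported optimizer settings translate directly into a standard PyTorch training loop. In this toy sketch, only the optimizer choice (Adam), learning rate (3e-4), batch size (2048), and absence of extra regularization come from the text; the dimensions are placeholders and random noise stands in for Gemma 2-2B layer-12 residual-stream activations.

```python
import torch

# Toy stand-in for the reported setup. Real runs use much larger
# dictionaries and 500M tokens of residual-stream activations.
d_model, d_dict = 64, 512  # placeholder sizes
enc = torch.nn.Linear(d_model, d_dict)
dec = torch.nn.Linear(d_dict, d_model)
opt = torch.optim.Adam(
    list(enc.parameters()) + list(dec.parameters()),
    lr=3e-4,  # learning rate reported in the paper
)

for step in range(5):
    x = torch.randn(2048, d_model)  # one batch of 2048 activation vectors
    x_hat = dec(torch.relu(enc(x)))
    loss = (x_hat - x).pow(2).mean()  # reconstruction loss only, no extra penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```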