Chimera: State Space Models Beyond Sequences
Authors: Aakash Lahoti, Tanya Marwah, Ratish Puduppully, Albert Gu
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate the versatility of our approach: Chimera achieves strong performance across the domains of language, vision, and graphs, outperforming BERT on GLUE by 0.7 points, ViT on ImageNet-1k by 2.6%, and all the baselines on the Long Range Graph Benchmark. |
| Researcher Affiliation | Collaboration | Aakash Lahoti (Carnegie Mellon University); Tanya Marwah (Carnegie Mellon University); Ratish Puduppully (IT University of Copenhagen); Albert Gu (Carnegie Mellon University, Cartesia AI) |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are provided in the paper. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating the release of open-source code for the methodology described. |
| Open Datasets | Yes | On language, it outperforms BERT on the GLUE benchmark (Wang et al., 2019)... On images, it surpasses ViT models on ImageNet-1k classification (Deng et al., 2009)... On general graphs, Chimera outperforms strong baselines on the Long Range Graph Benchmark (Dwivedi et al., 2021)... Both methods are trained on the Masked Language Modeling (MLM) (Devlin et al., 2019) task on the C4 dataset (Raffel et al., 2020) for 70k steps... |
| Dataset Splits | Yes | On language, it outperforms BERT on the GLUE benchmark (Wang et al., 2019)... On images, it surpasses ViT models on ImageNet-1k classification (Deng et al., 2009)... On general graphs, Chimera outperforms strong baselines on the Long Range Graph Benchmark (Dwivedi et al., 2021)... Both methods are trained on the Masked Language Modeling (MLM) (Devlin et al., 2019) task on the C4 dataset (Raffel et al., 2020) for 70k steps, following the recipe used in M2 (Fu et al., 2023). The models are then fine-tuned on the GLUE benchmark. |
| Hardware Specification | No | Finally, we note that on modern hardware accelerators such as GPUs and TPUs, various computational algorithms can have different efficiency tradeoffs. |
| Software Dependencies | No | No specific software dependencies with version numbers are explicitly provided in the paper. |
| Experiment Setup | Yes | In Table 8, we provide the architectural and training details for BERT-B and Chimera on the MLM task. For both the models, we follow the M2 recipe from Fu et al. (2023)... In Table 9, we present the reduced setting used for our ablation studies in Tables 6 and 3... The hyperparameters used to train Chimera are provided in Table 11. |