Demystifying the Token Dynamics of Deep Selective State Space Models
Authors: Thieu Vo, Duy-Tung Pham, Xin Tong, Tan Nguyen
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results validate these refinements, offering insights into enhancing Mamba's effectiveness in real-world applications. The code is publicly available at https://github.com/Fsoft-AIC/Mamba-token-dynamic. We empirically demonstrate the benefits of our token reordering method in improving the model's accuracy and convergence speed compared to the baseline Mamba on the large-scale ImageNet classification task (Deng et al., 2009) |
| Researcher Affiliation | Collaboration | Thieu N. Vo (Department of Mathematics, National University of Singapore); Duy-Tung Pham (FPT Software AI Center); Xin T. Tong (Department of Mathematics, National University of Singapore); Tan M. Nguyen (Department of Mathematics, National University of Singapore) |
| Pseudocode | Yes | Algorithm 1 Forwarding through a SSM layer with reordering |
| Open Source Code | Yes | The code is publicly available at https://github.com/Fsoft-AIC/Mamba-token-dynamic. |
| Open Datasets | Yes | We conduct evaluations on the WIKITEXT103 language modeling task (Merity et al., 2017). We benchmark on the image classification task using the ImageNet-1K dataset (Deng et al., 2009). |
| Dataset Splits | Yes | We utilize WIKITEXT103 (Merity et al., 2017), created from Wikipedia articles. It includes a training set of approximately 28,000 articles, amounting to 103 million words in total. Each article is split into sections of about 3,600 words. The validation and test sets contain 60 articles each, with word counts of 218,000 and 246,000 respectively, combining for a total of roughly 464,000 words. The ImageNet-1K dataset (Deng et al., 2009) ... It contains a collection of 1.28 million labeled training images and 50,000 validation images. |
| Hardware Specification | Yes | All experiments are conducted using a server with four A100 GPUs. |
| Software Dependencies | No | The paper mentions using the 'AdamW optimizer' and the 'GPT-2 tokenizer' but does not provide specific version numbers for these or any other software libraries (e.g., PyTorch, TensorFlow, Python, CUDA). |
| Experiment Setup | Yes | Table 4 (hyperparameter configuration for the language modeling task on WIKITEXT103): Sequence Length 1024; Peak Learning Rate 0.0015; Momentum β1 = 0.9, β2 = 0.999; Weight Decay 0.25; Batch Size 128; Learning Rate Warmup: Linear; Learning Rate Scheduler: Cosine decay; Dropout 0.25. Table 5 (hyperparameter configuration for the image classification task on ImageNet-1K): Optimizer AdamW; Peak Learning Rate 0.0008; Momentum β1 = 0.9, β2 = 0.999; Weight Decay 0.05; Batch Size 512; Learning Rate Warmup: Linear; Learning Rate Scheduler: Cosine decay; Dropout 0.0 |
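The Pseudocode row refers to the paper's Algorithm 1 (forwarding through an SSM layer with reordering), which this table does not reproduce. For context only, here is a minimal pure-Python sketch of the generic selective-SSM recurrence such a layer computes (h_t = Ā_t·h_{t-1} + B̄_t·x_t, y_t = C_t·h_t), with a hypothetical `order` permutation standing in for the paper's token-reordering step; the actual reordering rule is defined in the paper, not here.

```python
def selective_ssm_scan(x, A_bar, B_bar, C, order=None):
    """Minimal scalar-state selective SSM scan (illustrative only).

    x, A_bar, B_bar, C are equal-length lists of floats; A_bar, B_bar,
    and C play the role of the input-dependent discretized parameters
    of a selective SSM. `order` is a hypothetical permutation of token
    indices standing in for a reordering step; it is NOT the rule from
    the paper's Algorithm 1.
    """
    if order is not None:
        # Permute tokens (and their parameters) before the scan.
        x = [x[i] for i in order]
        A_bar = [A_bar[i] for i in order]
        B_bar = [B_bar[i] for i in order]
        C = [C[i] for i in order]
    h, ys = 0.0, []
    for xt, at, bt, ct in zip(x, A_bar, B_bar, C):
        h = at * h + bt * xt  # recurrent state update
        ys.append(ct * h)     # per-token readout
    return ys
```

Because the state update is recurrent, permuting the tokens changes every subsequent hidden state, which is why the ordering of tokens matters for such models.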
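Tables 4 and 5 both specify a linear learning-rate warmup followed by cosine decay from the peak learning rate. A dependency-free sketch of that schedule, using the Table 4 peak of 0.0015 (the warmup and total step counts are illustrative assumptions, not values reported in the tables):

```python
import math

def lr_at_step(step, peak_lr=0.0015, warmup_steps=1000, total_steps=100_000):
    """Linear warmup to peak_lr, then cosine decay to zero.

    warmup_steps and total_steps are hypothetical placeholders; the
    paper's tables report only the peak LR and the schedule shapes.
    """
    if step < warmup_steps:
        # Linear warmup: ramp from 0 up to the peak learning rate.
        return peak_lr * step / warmup_steps
    # Cosine decay over the remaining training steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
```

The same shape applies to the ImageNet-1K setup with `peak_lr=0.0008`; in a framework-based run this would typically be handled by the optimizer's built-in scheduler rather than hand-rolled.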