A New Perspective on Shampoo's Preconditioner
Authors: Depen Morwani, Itai Shapira, Nikhil Vyas, Eran Malach, Sham Kakade, Lucas Janson
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Across a variety of datasets and architectures we empirically demonstrate that this is close to the optimal Kronecker product approximation. We also study the impact of batch gradients and empirical Fisher on the quality of Hessian approximation. |
| Researcher Affiliation | Academia | Depen Morwani, Kempner Institute, Harvard University, EMAIL; Itai Shapira, SEAS, Harvard University, EMAIL; Nikhil Vyas, SEAS, Harvard University, EMAIL; Eran Malach, Kempner Institute, Harvard University, EMAIL; Sham Kakade, Kempner Institute, Harvard University, EMAIL; Lucas Janson, Department of Statistics, Harvard University, EMAIL |
| Pseudocode | No | The paper describes iterative procedures and mathematical formulations but does not contain explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper mentions that its theoretical insights were utilized in the design of SOAP (Vyas et al., 2024), a recently proposed optimizer that improves upon AdamW and Shampoo on language-modeling tasks, but it does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | We conducted experiments on three datasets: MNIST (LeCun et al., 1998), CIFAR-5M (Nakkiran et al., 2020), and ImageNet (Deng et al., 2009) |
| Dataset Splits | No | The paper specifies using certain datasets and describes subsampling for MNIST, but it does not provide explicit details on training, validation, and test splits (e.g., percentages, sample counts, or references to predefined splits with specific citations). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using PyTorch in footnotes for model references (e.g., 'https://pytorch.org/vision/master/_modules/torchvision/models/resnet.html#resnet18') but does not specify version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Table 1: Summary of Experimental Configurations (λ denotes weight decay, β1 denotes momentum). MNIST: Linear Classifier, GD, full batch, 25 steps, lr 0.01, no λ, β1 = 0. CIFAR-5M: ResNet18, SGD, batch size 128, 10000 steps, lr 0.02, no λ, β1 = 0.9. ImageNet: ConvNeXt-T, AdamW, batch size 2048, 50000 steps, lr 3e-3, λ = 5e-3, β1 = 0.9. |
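The Research Type row quotes the paper's claim that Shampoo's preconditioner is close to the optimal Kronecker product approximation of a target matrix. As background for that claim, the snippet below is a minimal sketch of computing the nearest Kronecker product in Frobenius norm via the standard Van Loan–Pitsianis rearrangement (reshape the matrix so the problem becomes a rank-1 SVD approximation); the function name and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def nearest_kronecker(H, m1, n1, m2, n2):
    """Best Frobenius-norm approximation of H ((m1*m2) x (n1*n2)) by a
    Kronecker product A (x) B with A of shape (m1, n1), B of shape (m2, n2).

    Uses the Van Loan-Pitsianis rearrangement: if H = A (x) B exactly, then
    the rearranged matrix R equals vec(A) vec(B)^T, so the best Kronecker
    approximation comes from the best rank-1 approximation of R.
    """
    # Rearrange: R[i*n1 + j, p*n2 + q] = H[i*m2 + p, j*n2 + q] = a_ij * b_pq
    R = H.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    # Rank-1 truncated SVD of the rearranged matrix
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    B = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return A, B
```

For an H that is exactly a Kronecker product, `np.kron(A, B)` recovers H (up to a sign shared between A and B, which cancels in the product); for a general H it gives the closest Kronecker factorization in Frobenius norm.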