Interlocking Backpropagation: Improving depthwise model-parallelism

Authors: Aidan N. Gomez, Oscar Key, Kuba Perlin, Stephen Gou, Nick Frosst, Jeff Dean, Yarin Gal

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We assess our strategies on both image classification ResNets and Transformer language models, finding that our strategy consistently outperforms local learning in terms of task performance, and outperforms global learning in training efficiency."
Researcher Affiliation | Collaboration | Aidan N. Gomez (University of Oxford & Cohere), Oscar Key (University of Oxford), Kuba Perlin (Cohere), Stephen Gou (Cohere), Nick Frosst (Cohere), Jeff Dean (Google), Yarin Gal (University of Oxford)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. It describes methodologies in prose and uses mathematical formulas and diagrams.
Open Source Code | Yes | "We provide a generic, open-source framework for the study of this class of optimisation algorithms. It is available at https://github.com/oscarkey/interlocking-backprop."
Open Datasets | Yes | "We investigate the behaviour of our method when training a small convolutional network on the CIFAR-10 dataset (Krizhevsky and Hinton, 2009). ... ResNets (He et al., 2016) on CIFAR-10, CIFAR-100, and ImageNet (Deng et al., 2009). ... trained and evaluated the models with the One Billion Word Benchmark for Language Modelling (Chelba et al., 2013)."
Dataset Splits | Yes | "For CIFAR we train on the entire training set, and report results on the test set. For ImageNet we train on the training set, and report results on the validation set. ... For fine-tuning we withheld 10% of the training set to create a validation set, which we used to perform the grid search."
Hardware Specification | Yes | "Each module was trained on a v3-8 TPU."
Software Dependencies | No | The paper mentions optimizers like Adam and SGD, but does not provide specific version numbers for any software libraries or dependencies. For example, it mentions Adam(β1 = 0.9, β2 = 0.98, ϵ = 10^(-9)) but no software version.
Experiment Setup | Yes | "We use the Adam optimizer with a learning rate of 0.0001. We train for 100 epochs. We do not use learning rate decay, weight decay, or data augmentation." ... optimizer: SGD; initial learning rate: 0.1; weight decay: 0.0002; momentum: 0.9; ... batch size: 128 / 256; ... optimizer: Adam(β1 = 0.9, β2 = 0.98, ϵ = 10^(-9)); learning rate: 1024^(-0.5) · min(step^(-0.5), step · warmup_steps^(-1.5)); lr warmup steps: 4000; total epochs: 1
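The learning-rate expression quoted above is the standard inverse-square-root schedule with linear warmup (as in Vaswani et al., 2017), with model dimension 1024 and 4000 warmup steps taken from the paper's table. A minimal sketch of what that formula computes, assuming this reading (the function name is illustrative, not from the authors' code):

```python
def transformer_lr(step: int, d_model: int = 1024, warmup_steps: int = 4000) -> float:
    """Learning rate at a given step: linear warmup to a peak at
    `warmup_steps`, then decay proportional to 1/sqrt(step)."""
    step = max(step, 1)  # guard against step 0 (would divide by zero)
    # lr = d_model^(-0.5) * min(step^(-0.5), step * warmup_steps^(-1.5))
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```

The two branches of the `min` cross exactly at `step == warmup_steps`, so the rate rises linearly to its peak there and falls off as the inverse square root of the step count afterwards.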