Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Symmetry Teleportation for Accelerated Optimization

Authors: Bo Zhao, Nima Dehmamy, Robin Walters, Rose Yu

NeurIPS 2022 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experimentally, we show that teleportation improves the convergence speed of gradient descent and AdaGrad for several optimization problems including test functions, multi-layer regressions, and MNIST classification." |
| Researcher Affiliation | Collaboration | Bo Zhao (University of California, San Diego); Nima Dehmamy (IBM Research); Robin Walters (Northeastern University); Rose Yu (University of California, San Diego) |
| Pseudocode | Yes | "Algorithm 1: Symmetry Teleportation" (a minimal sketch follows the table) |
| Open Source Code | Yes | "Our code is available at https://github.com/Rose-STL-Lab/Symmetry-Teleportation." |
| Open Datasets | Yes | "MNIST classification. We apply symmetry teleportation on the MNIST classification task (Deng, 2012)." |
| Dataset Splits | Yes | "We split the training set into 48,000 for training and 12,000 for validation." (illustrated below) |
| Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide version numbers for the software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | Rosenbrock function: "Each algorithm is run 1000 steps with learning rate 10^-3. We teleport the parameters every 100 steps." MNIST classification: "Learning rate is 2 × 10^-3, and learning rate for teleportation is 10^-3. Each optimization algorithm is run 80 epochs with batch size of 20." |
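
The split quoted in the Dataset Splits row carves MNIST's 60,000 training images into 48,000 for training and 12,000 for validation. Below is a minimal sketch of such a split, assuming torchvision's MNIST loader and a random partition; the paper may select the subsets differently.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# MNIST ships with 60,000 training images; hold out 12,000 for validation.
train_full = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
train_set, val_set = random_split(
    train_full, [48000, 12000],
    generator=torch.Generator().manual_seed(0))  # fix the seed for reproducibility
```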
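
The Pseudocode and Experiment Setup rows reference Algorithm 1 and its hyperparameters. As a rough illustration of the core idea on the paper's multi-layer regression setting, here is a minimal sketch for a two-layer linear loss ||Y - UVX||^2: the GL(d) action (U, V) -> (U g^-1, g V) leaves the loss unchanged, so at teleportation steps the sketch ascends the squared gradient norm over the group element g before resuming gradient descent. The helper name `teleport`, the number of ascent steps, the data shapes, and the reuse of the quoted learning rates (2 × 10^-3 for descent, 10^-3 for teleportation, teleporting every 100 steps) on this toy problem are all illustrative assumptions; the authors' actual implementation is in the linked repository.

```python
import torch

def teleport(U, V, X, Y, lr_teleport=1e-3, ascent_steps=10):
    # One teleportation for the two-layer linear loss ||Y - U V X||^2.
    # (U, V) -> (U g^-1, g V) keeps the loss fixed, so we search over g
    # for a point in the same level set with a larger gradient norm.
    d = V.shape[0]
    g = torch.eye(d, requires_grad=True)
    for _ in range(ascent_steps):
        U_t = U @ torch.linalg.inv(g)
        V_t = g @ V
        loss = ((Y - U_t @ V_t @ X) ** 2).mean()
        grads = torch.autograd.grad(loss, (U_t, V_t), create_graph=True)
        grad_norm = sum((gr ** 2).sum() for gr in grads)
        (g_grad,) = torch.autograd.grad(grad_norm, g)
        with torch.no_grad():
            g = g + lr_teleport * g_grad  # gradient *ascent* on the gradient norm
        g.requires_grad_(True)
    with torch.no_grad():
        return U @ torch.linalg.inv(g), g @ V

# Gradient descent with periodic teleportation (every 100 steps, as quoted).
torch.manual_seed(0)
U, V = torch.randn(20, 10), torch.randn(10, 5)
X, Y = torch.randn(5, 100), torch.randn(20, 100)
lr = 2e-3
for step in range(1000):
    if step > 0 and step % 100 == 0:
        U, V = teleport(U, V, X, Y)
    U.requires_grad_(True)
    V.requires_grad_(True)
    loss = ((Y - U @ V @ X) ** 2).mean()
    gU, gV = torch.autograd.grad(loss, (U, V))
    with torch.no_grad():
        U, V = U - lr * gU, V - lr * gV
```

Because teleportation moves the parameters within a level set, the loss value is unchanged at the teleport step itself; what increases is the gradient norm, and with it the progress of the subsequent descent steps.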