Auto-Lambda: Disentangling Dynamic Task Relationships

Authors: Shikun Liu, Stephen James, Andrew Davison, Edward Johns

TMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We extensively evaluate Auto-λ in both multi-task learning and auxiliary learning settings within both computer vision and robotics domains. We show that Auto-λ outperforms not only all multi-task and auxiliary learning optimisation strategies, but also the optimal (but static) task groupings we found in the selected datasets.
Researcher Affiliation | Academia | Shikun Liu (Dyson Robotics Lab, Imperial College London); Stephen James (University of California, Berkeley); Andrew J. Davison (Dyson Robotics Lab, Imperial College London); Edward Johns (Robot Learning Lab, Imperial College London)
Pseudocode | No | The paper describes the Auto-λ framework and its optimisation strategy using mathematical formulations (e.g., Eqs. 3, 4, 5, and 6), but does not include structured pseudocode or an algorithm block.
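Since the paper gives its optimisation strategy only as equations, the following is a loose, toy-scale illustration (not the authors' implementation) of the general idea: K task losses are combined with learnable weightings λ, and λ is then nudged by a gradient step informed by validation performance. The helper names and the assumption of precomputed scalar losses and validation gradients are mine, for illustration only.

```python
# Toy sketch only -- NOT the authors' code. Assumes scalar per-task losses
# and that gradients of the validation loss w.r.t. λ are already available.

def combined_loss(lambdas, task_losses):
    # Weighted sum of per-task training losses under the current λ.
    return sum(lam * loss for lam, loss in zip(lambdas, task_losses))

def update_lambdas(lambdas, val_grads, lr=0.01):
    # One gradient step on the task weightings λ, driven by validation-loss
    # gradients -- a very high-level caricature of the bi-level scheme.
    return [lam - lr * g for lam, g in zip(lambdas, val_grads)]
```

In the actual method the validation gradients are themselves derived from the model's lookahead updates; this sketch only shows the shape of the outer λ update.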
Open Source Code | Yes | Code is available at https://github.com/lorenmt/auto-lambda.
Open Datasets | Yes | First, we evaluated Auto-λ with dense prediction tasks in NYUv2 (Nathan Silberman & Fergus, 2012) and CityScapes (Cordts et al., 2016)... We trained on CIFAR-100 (Krizhevsky, 2009)... We selected 10 tasks... from the robot learning environment, RLBench (James et al., 2020)... CelebA dataset (Liu et al., 2015)
Dataset Splits | No | The paper mentions using specific datasets (NYUv2, CityScapes, CIFAR-100, RLBench, CelebA) and refers to prior work for experimental settings, but it does not explicitly provide the training/validation/test splits (e.g., percentages or sample counts) within its own text.
Hardware Specification | No | The paper mentions 'GPU memory consumption' when discussing stochastic task sampling, but it does not provide specific details about the GPU models, CPU, or any other hardware used for experiments.
Software Dependencies | No | The paper mentions optimisers such as SGD with momentum and Adam in Appendix A, but it does not provide specific version numbers for programming languages, machine learning frameworks (e.g., PyTorch, TensorFlow), or other software libraries used in the experiments.
Experiment Setup | Yes | For dense prediction tasks... trained Auto-λ with learning rates 10⁻⁴ and 3×10⁻⁵ for NYUv2 and CityScapes respectively. For multi-domain classification tasks, we trained each and all tasks with SGD with momentum, using an initial learning rate of 0.1, momentum 0.9, and weight decay 5×10⁻⁴. We applied cosine annealing for learning-rate decay and trained for 200 epochs in total. We set batch size 32 and trained Auto-λ with a 3×10⁻⁴ learning rate. For robot manipulation tasks, we trained with Adam at a constant learning rate of 10⁻³ for 8000 iterations. We set batch size 32 and trained Auto-λ with a 3×10⁻⁵ learning rate.
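The classification schedule above (initial learning rate 0.1, cosine annealing over 200 epochs) can be sketched with the standard cosine-annealing formula. The paper does not give code, so this assumes the conventional formulation with a minimum learning rate of 0 (my assumption, not stated in the paper).

```python
import math

def cosine_annealed_lr(epoch, total_epochs=200, lr_init=0.1, lr_min=0.0):
    # Standard cosine annealing: starts at lr_init, decays smoothly to
    # lr_min by total_epochs. Defaults match the setup quoted above;
    # lr_min=0 is an assumption, not stated in the paper.
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))
```

For example, the learning rate is 0.1 at epoch 0, halves to 0.05 at epoch 100, and reaches 0 at epoch 200.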