LibMTL: A Python Library for Deep Multi-Task Learning

Authors: Baijiong Lin, Yu Zhang

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Table 1, we compare different MTL methods on the NYUv2 data set (Silberman et al., 2012) and set up a benchmark for MTL. The NYUv2 data set is an indoor scene understanding data set and has been used extensively in the MTL literature. It contains 3 tasks: semantic segmentation (denoted by Segmentation), depth estimation (denoted by Depth), and surface normal prediction (denoted by Normal). The implementation details and evaluation metrics are following Lin et al. (2022).
Researcher Affiliation | Academia | Baijiong Lin (The Hong Kong University of Science and Technology (Guangzhou)); Yu Zhang (Department of Computer Science and Engineering, Southern University of Science and Technology; Peng Cheng Laboratory)
Pseudocode | No | The paper describes methods and a library architecture but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code and detailed documentation of LibMTL are available at https://github.com/median-research-group/LibMTL and https://libmtl.readthedocs.io, respectively.
Open Datasets | Yes | In Table 1, we compare different MTL methods on the NYUv2 data set (Silberman et al., 2012) and set up a benchmark for MTL.
Dataset Splits | No | The paper mentions evaluating on the NYUv2 dataset and notes that 'Each experiment is repeated over 3 random seeds', but it does not provide specific details on training, validation, or test splits. It states 'The implementation details and evaluation metrics are following Lin et al. (2022)' without explicitly describing the splits in this paper.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper states that LibMTL is 'built on PyTorch' and that it is a 'Python library', but it does not specify version numbers for PyTorch, Python, or any other software dependencies.
Experiment Setup | No | The paper mentions that the LibMTL.config module handles configuration parameters such as hyper-parameters (batch size, running epoch, random seed, learning rate) and states that 'The implementation details and evaluation metrics are following Lin et al. (2022)'. However, it does not explicitly provide the specific values for these hyper-parameters or training configurations in the main text of this paper.
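The Research Type row above quotes the paper's NYUv2 benchmark, where three tasks (segmentation, depth, normal) are trained jointly. A minimal NumPy sketch of the hard-parameter-sharing pattern that such MTL benchmarks build on — one shared encoder feeding a lightweight head per task. All names, dimensions, and the 13-class segmentation output here are illustrative assumptions, not LibMTL's actual API:

```python
import numpy as np

# Hypothetical hard-parameter-sharing setup: one encoder shared by all
# tasks, plus one task-specific head each (illustrative dimensions).
rng = np.random.default_rng(0)

def make_linear(n_in, n_out):
    # A fixed random linear map standing in for a trained layer.
    W = rng.standard_normal((n_in, n_out)) * 0.01
    return lambda x: x @ W

shared_encoder = make_linear(16, 8)       # shared across all three tasks
heads = {
    "segmentation": make_linear(8, 13),   # per-pixel class scores (13 classes assumed)
    "depth": make_linear(8, 1),           # scalar depth per pixel
    "normal": make_linear(8, 3),          # surface normal vector
}

x = rng.standard_normal((4, 16))          # a mini-batch of input features
features = shared_encoder(x)              # shared computation happens once
outputs = {task: head(features) for task, head in heads.items()}

for task, out in outputs.items():
    print(task, out.shape)
```

The design point: the encoder's forward pass is computed once and reused by every head, which is what makes joint multi-task training cheaper than three separate single-task models.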
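The Experiment Setup row notes that LibMTL.config manages hyper-parameters such as batch size, running epoch, random seed, and learning rate, without the paper giving their values. A hypothetical argparse-based sketch of such a config layer — the flag names and defaults below are assumptions for illustration, not LibMTL's real interface:

```python
import argparse

# Hypothetical config layer mirroring the hyper-parameters the paper
# attributes to LibMTL.config (flag names and defaults are assumptions).
def build_parser():
    parser = argparse.ArgumentParser(description="MTL training configuration")
    parser.add_argument("--bs", type=int, default=8, help="batch size")
    parser.add_argument("--epochs", type=int, default=200, help="running epochs")
    parser.add_argument("--seed", type=int, default=0, help="random seed")
    parser.add_argument("--lr", type=float, default=1e-4, help="learning rate")
    return parser

# Passing an empty list yields the defaults; a real run would parse sys.argv.
args = build_parser().parse_args([])
print(args.bs, args.epochs, args.seed, args.lr)

# Individual values can be overridden CLI-style:
override = build_parser().parse_args(["--lr", "0.01"])
print(override.lr)
```

Centralizing these values in one parser is what lets a library report the full experimental configuration for reproducibility, which is exactly the information this audit found missing from the paper's main text.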