Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning

Authors: Niloofar Yousefi, Yunwen Lei, Marius Kloft, Mansooreh Mollaghasemi, Georgios C. Anagnostopoulos

JMLR 2018

Reproducibility Variable | Result | LLM Response
Research Type: Theoretical — We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), with which we establish sharp excess risk bounds for MTL in terms of the Local Rademacher Complexity (LRC). We also give a new bound on the LRC for any norm-regularized hypothesis class, which applies not only to MTL but also to the standard Single-Task Learning (STL) setting. By combining both results, one can easily derive fast-rate bounds on the excess risk for many prominent MTL methods, including, as we demonstrate, Schatten norm, group norm, and graph regularized MTL. The derived bounds reflect a relationship akin to a conservation law of asymptotic convergence rates.
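As context for the abstract quoted above, the central quantity it names can be stated in a standard form. The following is a sketch in our own notation, not the paper's exact statement: the empirical Rademacher complexity of a function class, and its "local" variant, which restricts the supremum to low-variance functions and is what enables the fast rates the abstract refers to.

```latex
% Empirical Rademacher complexity of a class F on a sample X_1,...,X_n,
% with i.i.d. Rademacher signs sigma_i uniform on {-1,+1}:
\[
  \widehat{\mathfrak{R}}_n(\mathcal{F})
    = \mathbb{E}_{\sigma}\!\left[ \sup_{f \in \mathcal{F}}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, f(X_i) \right].
\]
% The *local* version restricts the supremum to functions with small
% second moment P f^2 <= r, which yields rates faster than 1/sqrt(n):
\[
  \mathfrak{R}_n(\mathcal{F}; r)
    = \mathbb{E}\!\left[ \sup_{\substack{f \in \mathcal{F} \\ P f^2 \le r}}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, f(X_i) \right].
\]
```

In the MTL setting considered by the paper, the analogous quantity additionally averages over the T tasks, each contributing its own n-sample.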
Researcher Affiliation: Academia — Niloofar Yousefi (EMAIL), Department of Electrical Engineering and Computer Science, University of Central Florida, Orlando, FL 32816, USA; Yunwen Lei (EMAIL), Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China; Marius Kloft (EMAIL), Department of Computer Science, Technische Universität Kaiserslautern, 67653 Kaiserslautern, Germany; Mansooreh Mollaghasemi (EMAIL), Department of Industrial Engineering & Management Systems, University of Central Florida, Orlando, FL 32816, USA; Georgios C. Anagnostopoulos (EMAIL), Department of Electrical and Computer Engineering, Florida Institute of Technology, Melbourne, FL 32901, USA
Pseudocode: No — The paper focuses on theoretical derivations, theorems, lemmas, and proofs of learning guarantees. It does not contain any structured pseudocode or algorithm blocks.
Open Source Code: No — The paper presents theoretical bounds and derivations. There is no mention of open-source code, code repositories, or supplementary materials containing code for the methodology described.
Open Datasets: No — The paper develops theoretical learning guarantees for Multi-Task Learning. It refers to 'T supervised learning tasks sampled from the same input-output space X × Y' but does not specify any particular dataset or provide access information.
Dataset Splits: No — The paper is theoretical and does not perform empirical experiments requiring dataset splits. It mentions 'T supervised learning tasks sampled from the same input-output space X × Y', with the i.i.d. sample for each task t described by the sequence (X_t^i, Y_t^i)_{i=1}^n drawn from µ_t, but no specific dataset or splitting methodology is provided.
Hardware Specification: No — The paper is entirely theoretical, focusing on mathematical bounds and proofs, so no hardware specifications for experiments are mentioned.
Software Dependencies: No — The paper is theoretical and does not describe any software implementation or dependencies with version numbers.
Experiment Setup: No — The paper is purely theoretical, focusing on deriving learning guarantees and bounds. It does not include any experimental setup details, hyperparameters, or training configurations.