Sample Complexity for Distributionally Robust Learning under χ²-divergence

Authors: Zhengyu Zhou, Weiwei Liu

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper investigates the sample complexity of learning a distributionally robust predictor under a particular distributional shift based on χ²-divergence... It demonstrates that any hypothesis class H with finite VC dimension is distributionally robustly learnable, and that when the perturbation size is smaller than a constant, finite VC dimension is also necessary for distributionally robust learning, via a lower bound on sample complexity in terms of VC dimension. The main contributions are as follows: (1) under the χ²-divergence regime, a hypothesis class H with finite VC dimension is distributionally robustly PAC-learnable via DRERM (distributionally robust empirical risk minimization); (2) when the perturbation size ρ is smaller than a constant, finite VC dimension is necessary for distributionally robust learning; (3) without a sufficient number of samples (depending on the VC dimension of H), no hypothesis class H is distributionally robustly PAC-learnable. (A notational sketch of the robust objective follows this table.)
Researcher Affiliation | Academia | Zhengyu Zhou (EMAIL), School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, China
Pseudocode | No | The paper presents its proofs and derivations in prose and mathematical notation; it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no statements about releasing source code, no links to code repositories, and no mention of code in supplementary materials.
Open Datasets | No | The paper is theoretical and conducts no experiments involving datasets, so it provides no information about public datasets.
Dataset Splits | No | As a theoretical paper, it describes no experimental setup involving training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and describes no experimental implementation, so no hardware specifications are mentioned.
Software Dependencies | No | This theoretical paper details no experimental implementation, so no specific software dependencies or versions are listed.
Experiment Setup | No | The paper focuses on theoretical derivations and proofs and therefore includes no experiment setup or hyperparameter values.
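
For context, the robust objective the abstract refers to can be written in standard DRO notation. This is a minimal sketch; the symbols (P for the data distribution, P̂_n for its empirical counterpart, ρ for the perturbation size, ℓ for a bounded loss) follow common conventions rather than the paper's exact notation:

    \chi^2(Q \,\|\, P) = \int \Big( \frac{dQ}{dP} - 1 \Big)^2 dP,
    \qquad
    R_\rho(h) = \sup_{Q \,:\, \chi^2(Q \| P) \le \rho} \mathbb{E}_Q\big[ \ell(h(X), Y) \big],

    \text{DRERM:}\quad
    \hat{h} \in \operatorname*{arg\,min}_{h \in \mathcal{H}} \;
    \sup_{Q \,:\, \chi^2(Q \| \hat{P}_n) \le \rho} \mathbb{E}_Q\big[ \ell(h(X), Y) \big].

Over the empirical distribution, the inner supremum has a well-known variance-regularized closed form: the empirical mean of the losses plus sqrt(ρ × empirical variance), which is exact whenever the maximizing sample weights stay nonnegative (true for small ρ) and an upper bound otherwise. A minimal Python sketch under that assumption; `hypotheses` and `losses_fn` are hypothetical placeholders, not the paper's construction:

    import numpy as np

    def chi2_worst_case_risk(losses: np.ndarray, rho: float) -> float:
        """Worst-case empirical risk over the chi-square ball of radius rho.

        Uses the variance-regularized dual form
            sup_{Q : chi2(Q || P_n) <= rho} E_Q[loss] = mean + sqrt(rho * var),
        exact whenever the maximizing sample weights remain nonnegative
        (true for small rho); otherwise an upper bound.
        """
        mean = losses.mean()
        var = losses.var()  # 1/n-normalized variance, matching the dual form
        return mean + np.sqrt(rho * var)

    def drerm(hypotheses, losses_fn, data, rho):
        """DRERM sketch: pick the hypothesis with the smallest worst-case risk.

        `hypotheses` is a finite iterable of candidate predictors and
        `losses_fn(h, data)` returns the per-sample loss vector; both are
        illustrative placeholders.
        """
        return min(hypotheses,
                   key=lambda h: chi2_worst_case_risk(losses_fn(h, data), rho))

The closed form sidesteps solving the inner maximization numerically, which is why variance regularization is the standard computational handle on χ²-ball DRO.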