Distribution-Specific Agnostic Conditional Classification With Halfspaces

Authors: Jizhou Huang, Brendan Juba

ICLR 2025

Reproducibility assessment (each row gives the variable, the result, and the LLM's response):
Research Type: Theoretical. We give both positive and negative results for Gaussian feature distributions. On the positive side, we present the first PAC-learning algorithm for homogeneous halfspace selectors with error guarantee O( opt), where opt is the optimal conditional 0-1 loss over the given class of classifiers and homogeneous halfspaces. On the negative side, we show that, under cryptographic assumptions, approximating the conditional loss within a small additive error is computationally hard even under the Gaussian distribution. We prove that approximating conditional classification is at least as hard as approximating agnostic classification in both additive and multiplicative forms.
Researcher Affiliation: Academia. Jizhou Huang and Brendan Juba, Washington University in St. Louis.
Pseudocode: Yes. Algorithm 1: Conditional Classification For Finite C; Algorithm 2: Projected SGD for L_D(w); Algorithm 3: Conditional Classification For Sparse Linear C.
Open Source Code: No. The paper does not contain any explicit statements about releasing source code or providing links to code repositories.
Open Datasets: No. The paper focuses on theoretical analysis, specifically for "Gaussian feature distributions" and "standard normal x-marginals", which are mathematical constructs rather than specific empirical datasets. No real-world or benchmark datasets are mentioned as being used or made available.
Dataset Splits: No. The paper is theoretical and does not conduct experiments on specific datasets; therefore, there is no mention of dataset splits such as training/validation/test sets.
Hardware Specification: No. The paper is theoretical and describes algorithms and hardness results. It does not mention any hardware used for running experiments.
Software Dependencies: No. The paper focuses on theoretical algorithms and their guarantees. It does not specify any software names with version numbers that would be used for implementation.
Experiment Setup: No. The paper is theoretical, presenting algorithms and hardness results rather than empirical experiments. Therefore, no experimental setup details such as hyperparameter values or training configurations are provided.
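To make the "conditional 0-1 loss" quantity from the abstract concrete, the following is a minimal sketch of how one could measure the error of a classifier restricted to a homogeneous halfspace selector. The function name, the tie-breaking convention at the boundary, and the value returned when no points are selected are all illustrative assumptions, not the paper's formal definitions.

```python
import numpy as np

def conditional_01_loss(w, X, y, classifier):
    """Error rate of `classifier` among the points selected by the
    homogeneous halfspace selector <w, x> >= 0.

    Hypothetical illustration of the quantity `opt` described in the
    abstract; the paper's formal definition may differ in details.
    """
    selected = X @ w >= 0              # homogeneous halfspace selector
    if not selected.any():
        return 0.0                     # assumed convention: empty selection
    preds = classifier(X[selected])
    return float(np.mean(preds != y[selected]))

# Example: a constant classifier evaluated only on the selected half.
X = np.array([[1.0, 0.0], [-1.0, 0.0], [2.0, 1.0]])
y = np.array([1, 0, 0])
w = np.array([1.0, 0.0])
always_one = lambda Z: np.ones(len(Z), dtype=int)
loss = conditional_01_loss(w, X, y, always_one)  # error among selected points
```

The point of conditional classification is that `loss` depends on both the classifier and the selector: a good selector excludes the region where the classifier errs.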
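Algorithm 2 is named "Projected SGD for L_D(w)" but its details are not reproduced here, so the following shows only the generic shape of projected SGD: a gradient step followed by Euclidean projection onto a norm ball. The loss gradient, step size, and projection set are stand-in assumptions, not the paper's choices.

```python
import numpy as np

def projected_sgd(grad_fn, w0, steps=1000, lr=0.1, radius=1.0, rng=None):
    """Generic projected (stochastic) gradient descent.

    Sketch of the shape of "Projected SGD for L_D(w)"; `grad_fn` is a
    hypothetical stand-in for a (stochastic) gradient of the paper's
    loss, and the unit-ball projection is an assumed constraint set.
    """
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * grad_fn(w, rng)   # gradient step
        norm = np.linalg.norm(w)
        if norm > radius:              # project back onto the ball
            w = w * (radius / norm)
    return w

# Example: minimize ||w - c||^2 for a target c outside the unit ball;
# the constrained minimizer is the projection of c onto the ball.
c = np.array([2.0, 0.0])
w_hat = projected_sgd(lambda w, rng: 2 * (w - c), np.zeros(2),
                      steps=200, lr=0.1, radius=1.0)
```

Here `w_hat` ends up on the boundary of the ball in the direction of `c`, which is the expected behavior of projection onto a Euclidean ball.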