Unified Risk Analysis for Weakly Supervised Learning

Authors: Chao-Kai Chiang, Masashi Sugiyama

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we introduce a framework providing a comprehensive understanding and a unified methodology for WSL. The formulation component of the framework, leveraging a contamination perspective, provides a unified interpretation of how weak supervision is formed and subsumes fifteen existing WSL settings. The analysis component of the framework, viewed as a decontamination process, provides a systematic method of conducting risk rewrite. In addition to the conventional inverse matrix approach, we devise a novel strategy called marginal chain aiming to decontaminate distributions. We justify the feasibility of the proposed framework by recovering existing rewrites reported in the literature.
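The "inverse matrix approach" to risk rewrite mentioned in the abstract can be illustrated with a standard backward loss correction. This is a minimal sketch, not the paper's exact construction: the 2x2 contamination matrix `T` and the prediction `p` below are toy assumptions chosen for illustration.

```python
import numpy as np

# Toy contamination matrix: T[i, j] = P(weak label j | true label i).
# This specific matrix is an illustrative assumption, not from the paper.
T = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def backward_corrected_loss(loss_weak, T):
    """Decontaminate a vector of per-weak-label losses.

    Returns loss_clean = T^{-1} @ loss_weak, so that averaging the
    corrected losses over the weak-label distribution recovers the
    loss under the clean label in expectation.
    """
    return np.linalg.inv(T) @ loss_weak

# Example: cross-entropy losses of a prediction p against each label.
p = np.array([0.9, 0.1])            # assumed predicted class probabilities
loss_weak = -np.log(p)              # loss if the weak label were 0 or 1
loss_clean = backward_corrected_loss(loss_weak, T)
```

The key identity is that `T @ loss_clean == loss_weak`: taking the expectation of the corrected loss under the contaminated labels reproduces the clean-label loss, which is the sense in which the inverse matrix "rewrites" the risk.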
Researcher Affiliation | Academia | Chao-Kai Chiang (EMAIL); Masashi Sugiyama (EMAIL); Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8561, Japan
Pseudocode | No | The paper describes methods and proofs using mathematical formulations and definitions (e.g., Proposition 1, Proposition 2, Theorem 21) but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not provide any explicit statement about releasing code, nor does it include links to a code repository. It mentions 'Reviewed on OpenReview: https://openreview.net/forum?id=RGsdAwWuu6', which is a review platform, not a code repository.
Open Datasets | No | The paper focuses on a theoretical framework for weakly supervised learning and risk rewrite. It discusses various WSL scenarios and their formulations but does not present any empirical evaluation on specific datasets, nor does it provide access information (links, DOIs, or citations with authors/years) for any publicly available or open datasets.
Dataset Splits | No | The paper is theoretical in nature, proposing a unified framework for weakly supervised learning. It does not conduct experiments on datasets; thus no dataset splits (training/validation/test) are mentioned or specified.
Hardware Specification | No | The paper describes a theoretical framework for weakly supervised learning and risk rewrite. It does not present any experimental results or computational performance benchmarks, and therefore does not mention any specific hardware.
Software Dependencies | No | The paper presents a theoretical framework and mathematical derivations for weakly supervised learning. It does not mention any specific software dependencies or version numbers required for implementation or reproduction.
Experiment Setup | No | The paper focuses on a theoretical framework and mathematical analysis for weakly supervised learning. It does not describe any experimental setups, hyperparameters, training configurations, or system-level settings.