Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning from Label Proportions with Generative Adversarial Networks

Authors: Jiabin Liu, Bo Wang, Zhiquan Qi, Yingjie Tian, Yong Shi

NeurIPS 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Several experiments on benchmark datasets demonstrate vivid advantages of the proposed approach. |
| Researcher Affiliation | Collaboration | Jiabin Liu, Samsung Research China Beijing, Beijing 100028, China; Bo Wang, University of International Business and Economics, Beijing 100029, China; Zhiquan Qi, Yingjie Tian, Yong Shi, University of Chinese Academy of Sciences, Beijing 100190, China |
| Pseudocode | Yes | Algorithm 1: LLP-GAN Training Algorithm |
| Open Source Code | Yes | Code is available at https://github.com/liujiabin008/LLP-GAN. |
| Open Datasets | Yes | Four benchmark datasets, MNIST, SVHN, CIFAR-10, and CIFAR-100 are investigated in our experiments. |
| Dataset Splits | Yes | In the experimental setting, the training data is equally divided into five minibatches, with 10,000 images in each one, and the test data with exactly 1,000 images in every category. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | To keep up the same settings in previous work, bag size is fixed as 16, 32, 64, and 128. |
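For context on the Experiment Setup and Dataset Splits rows above: in learning from label proportions (LLP), training data is partitioned into bags, and only the per-bag class proportions are observed, not instance labels. The sketch below illustrates that setup for the bag sizes reported in the paper (16, 32, 64, 128) with 10 classes, as in MNIST/SVHN/CIFAR-10. `make_bags` is a hypothetical helper for illustration, not code from the LLP-GAN repository.

```python
import numpy as np

def make_bags(labels, bag_size, num_classes, seed=0):
    """Shuffle indices, partition them into fixed-size bags, and return
    the bags plus each bag's class-proportion vector.

    Hypothetical helper illustrating the LLP setting; not the paper's code.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    idx = idx[: len(idx) - len(idx) % bag_size]  # drop the remainder
    bags = idx.reshape(-1, bag_size)             # each row is one bag
    proportions = np.stack([
        np.bincount(labels[b], minlength=num_classes) / bag_size
        for b in bags
    ])
    return bags, proportions

# Toy run with 50,000 synthetic labels over 10 classes.
labels = np.random.default_rng(1).integers(0, 10, size=50_000)
for bag_size in (16, 32, 64, 128):  # bag sizes used in the paper
    bags, props = make_bags(labels, bag_size, num_classes=10)
    # Each proportion vector sums to 1; only these vectors (not the
    # instance labels inside each bag) would be visible to the learner.
    assert np.allclose(props.sum(axis=1), 1.0)
```

The model is then trained to match predicted per-bag proportions to these observed vectors, which is the supervision signal LLP-GAN's Algorithm 1 builds on.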