Unsupervised Learning for Class Distribution Mismatch
Authors: Pan Du, Wangbo Zhao, Xinai Lu, Nian Liu, Zhikai Li, Chaoyu Gong, Suyun Zhao, Hong Chen, Cuiping Li, Kai Wang, Yang You
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three datasets demonstrate UCDM's superiority over previous semi-supervised methods. Specifically, with a 60% mismatch proportion on the Tiny-ImageNet dataset, our approach, without relying on labeled data, surpasses OpenMatch (with 40 labels per class) by 35.1%, 63.7%, and 72.5% in classifying known, unknown, and new classes. |
| Researcher Affiliation | Academia | ¹School of Information, Renmin University of China, and Engineering Research Center of Database and Business Intelligence, MOE, China; ²National University of Singapore; ³School of Agricultural Economics and Rural Development, Renmin University of China; ⁴Independent Researcher; ⁵Institute of Automation, Chinese Academy of Sciences. Correspondence to: Wangbo Zhao <EMAIL>, Suyun Zhao <EMAIL>, Pan Du <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: Diffusion-based data generation (sample generation stage). Input: training set D, prompt set of known classes C, diffusion model, positive instance set D_P, negative instance set D_N. Initialize D_P = ∅, D_N = ∅. For each x in D and each C_y in C: forward x to noise vectors x̂_T and x_T using Eq. (1) and Eq. (6), respectively; forward x̂_T and C_y to the diffusion model to obtain x̂_0 using Eq. (2), and add x̂_0 to D_P; forward x_T to the diffusion model to obtain x_0 using Eq. (8), and add x_0 to D_N. Algorithm 2: UCDM: Unsupervised Learning for Class Distribution Mismatch. |
| Open Source Code | Yes | The code is available at https://github.com/RUC-DWBI-ML/research/tree/main/UCDM-master. |
| Open Datasets | Yes | Datasets. Following previous works (Chen et al., 2020; Li et al., 2023), we employ three benchmark datasets: CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Deng et al., 2009). |
| Dataset Splits | Yes | Datasets. Following previous works (Chen et al., 2020; Li et al., 2023), we employ three benchmark datasets: CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Deng et al., 2009). For more details, please refer to Appendix C.1. Settings. (i) We vary the mismatch proportion, i.e., the percentage of unknown-class instances in the training data, across 0%, 20%, 40%, 60%, and 75%. Results for 0% mismatch are provided in Appendix B.1, with detailed class counts in Appendix C.1. Table 14: the counts of instances for known (kno.) and unknown (unkno.) classes in the training sets of CIFAR-10, CIFAR-100, and Tiny-ImageNet, with mismatch proportions ranging from 0% to 75%. Table 15: the counts of instances for known (kno.), unknown (unkno.), and new classes in the testing sets of CIFAR-10, CIFAR-100, and Tiny-ImageNet. |
| Hardware Specification | No | The paper mentions a "Google grant for TPU usage" in the acknowledgements, but does not provide specific details on the hardware (e.g., CPU, GPU, or TPU model/version) used to run the experiments described in the paper. |
| Software Dependencies | Yes | All experiments utilize the pretrained Stable Diffusion 2.0 model (Rombach et al., 2022) as the DPM generator, without further optimization. Table 17 (details of classifier training): model WideResNet-28-2 (Zagoruyko & Komodakis, 2016); optimizer Adam. |
| Experiment Setup | Yes | Table 17 (details of classifier training): model WideResNet-28-2 (Zagoruyko & Komodakis, 2016); data augmentation: random horizontal flipping and normalization; batch normalization optimized over the initial 100 iterations; optimizer Adam; epochs 400; input size 32×32; batch size 32; learning rate 5×10⁻³; loss weight λ1 = 1; loss weight λ2 = 2 (CIFAR-10), 5 (CIFAR-100), 20 (Tiny-ImageNet); confidence-based labeling every 40 epochs, for 10 rounds. |
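The generation loop of Algorithm 1 can be sketched as follows. This is a minimal structural sketch, not the authors' code: `add_noise`, `add_class_noise`, and `denoise` are hypothetical stand-ins for the forward and reverse diffusion steps of Eqs. (1), (6), (2), and (8), which in the paper are carried out by pretrained Stable Diffusion 2.0.

```python
# Sketch of Algorithm 1 (diffusion-based data generation).
# The diffusion steps are stubbed; the real system forwards
# instances through Stable Diffusion 2.0.

def add_noise(x):
    """Stand-in for Eq. (1): map instance x to noise vector x_hat_T."""
    return ("noised", x)

def add_class_noise(x):
    """Stand-in for Eq. (6): map instance x to noise vector x_T."""
    return ("class_noised", x)

def denoise(x_noisy, prompt=None):
    """Stand-in for Eqs. (2)/(8): reverse diffusion, optionally
    conditioned on a known-class prompt."""
    return (x_noisy, prompt)

def generate(train_set, prompts):
    """Build the positive set D_P (prompt-conditioned reconstructions)
    and the negative set D_N (unconditioned reconstructions) from
    every (instance, known-class prompt) pair."""
    d_pos, d_neg = [], []
    for x in train_set:
        for c_y in prompts:
            x_hat_t = add_noise(x)          # Eq. (1)
            x_t = add_class_noise(x)        # Eq. (6)
            d_pos.append(denoise(x_hat_t, c_y))  # Eq. (2) -> D_P
            d_neg.append(denoise(x_t))           # Eq. (8) -> D_N
    return d_pos, d_neg
```

Both sets grow once per (instance, prompt) pair, so each contains |D| × |C| generated samples.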
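The mismatch proportion, defined above as the percentage of unknown-class instances in the training data, can be illustrated with a small helper. The function name and the training-set size in the example are illustrative, not taken from the paper; the exact per-dataset counts are in Tables 14 and 15.

```python
def split_counts(n_train, mismatch):
    """Given a training-set size and a mismatch proportion
    (fraction of unknown-class instances), return the counts of
    known- and unknown-class instances."""
    n_unknown = round(n_train * mismatch)
    return n_train - n_unknown, n_unknown

# Sweep the proportions used in the paper's settings.
for p in (0.0, 0.2, 0.4, 0.6, 0.75):
    kno, unkno = split_counts(10_000, p)
    print(f"mismatch {p:.0%}: {kno} known, {unkno} unknown")
```

At a 60% mismatch, most of the training set comes from unknown classes, which is the regime where UCDM's reported gains over OpenMatch are largest.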
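The classifier-training hyperparameters reported in Table 17 can be collected into a plain configuration dictionary. This is only a transcription of the reported values for reference, not released code; the key names are our own, and the dataset-dependent loss weight λ2 is keyed by dataset name.

```python
# Transcription of Table 17 (classifier training) as a config dict.
train_config = {
    "model": "WideResNet-28-2",
    "data_augmentation": ["random horizontal flipping", "normalization"],
    "bn_warmup_iterations": 100,   # batch norm optimized over first 100 iters
    "optimizer": "Adam",
    "epochs": 400,
    "input_size": (32, 32),
    "batch_size": 32,
    "learning_rate": 5e-3,
    "lambda_1": 1,
    "lambda_2": {"CIFAR-10": 2, "CIFAR-100": 5, "Tiny-ImageNet": 20},
    "labeling_interval_epochs": 40,  # confidence-based labeling interval
    "labeling_rounds": 10,
}
```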