Positive-unlabeled AUC Maximization under Covariate Shift

Authors: Atsutoshi Kumagai, Tomoharu Iwata, Hiroshi Takahashi, Taishi Nishiyama, Kazuki Adachi, Yasuhiro Fujiwara

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally show the effectiveness of our method with six real-world datasets. Section 5 "Experiments" details extensive experimental validation with real-world datasets, performance metrics (AUC), and comparisons to existing methods.
Researcher Affiliation | Collaboration | ¹NTT Corporation, Japan; ²Yokohama National University, Japan. Correspondence to: Atsutoshi Kumagai <EMAIL>.
Pseudocode | Yes | Algorithm 1: Training procedure of the proposed method.
Open Source Code | No | The paper does not provide a specific link to a code repository or an explicit statement about releasing the source code for the described methodology.
Open Datasets | Yes | We used four real-world datasets in the main paper: MNIST (LeCun et al., 1998), Fashion MNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011), and CIFAR10 (Krizhevsky et al., 2009). ... In Appendix D.3, we also used real-world tabular datasets: epsilon and Hreadmission (Gardner et al., 2023).
Dataset Splits | Yes | For training in each dataset, we used 50 positive and 3,000 unlabeled data in the training distribution and 3,000 unlabeled data in the test distribution. Additionally, we used 20 positive and 250 unlabeled data in the training distribution and 250 unlabeled data in the test distribution for validation. ... We used 3,000 data in the test distribution as test data for evaluation. Training, validation, and test datasets did not overlap.
Hardware Specification | Yes | All methods were implemented using PyTorch (Paszke et al., 2017), and all experiments were conducted on a Linux server with an Intel Xeon CPU and an A100 GPU.
Software Dependencies | No | All methods were implemented using PyTorch (Paszke et al., 2017)... For all methods, we used the Adam optimizer (Kingma & Ba, 2014). The paper names PyTorch and the Adam optimizer but does not provide version numbers for these software dependencies.
Experiment Setup | Yes | For the proposed method and CPU, the relative parameter α was set to 0.5 for all datasets. ... The mini-batch size M was set to 512 and the positive mini-batch size P was set to 50. For all methods, we used the Adam optimizer (Kingma & Ba, 2014). We set the learning rate to 10^-4. The maximum number of epochs was 200.
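Since AUC is the evaluation metric throughout the report, a minimal sketch of the empirical (pairwise) AUC may help fix ideas. This is the standard Mann-Whitney estimator, not the paper's PU training objective under covariate shift; the function name is mine.

```python
def empirical_auc(pos_scores, neg_scores):
    """Empirical AUC: the fraction of (positive, negative) score pairs ranked
    correctly, counting ties as half a win. A reference metric only, not the
    paper's PU AUC maximization objective."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))
```

For example, `empirical_auc([0.9, 0.8], [0.1, 0.2])` returns 1.0 (every positive outscores every negative), while identical constant scores give 0.5.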
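The split sizes quoted in the Dataset Splits row can be sketched as a small sampling helper. The pool arguments, the helper name, and the use of `random.sample` are my illustration of the protocol; only the sizes and the no-overlap constraint come from the report.

```python
import random

def make_pu_splits(pos_pool, unl_train_pool, unl_test_pool, seed=0):
    """Draw disjoint index sets with the sizes quoted in the report:
    training uses 50 P + 3,000 U (training dist.) + 3,000 U (test dist.),
    validation uses 20 P + 250 U + 250 U, and evaluation uses 3,000
    test-distribution examples. Pool arguments are hypothetical."""
    rng = random.Random(seed)
    pos_pool = set(pos_pool)
    unl_tr_pool = set(unl_train_pool)
    unl_te_pool = set(unl_test_pool)

    def draw(pool, n):
        picked = rng.sample(sorted(pool), n)
        pool.difference_update(picked)  # remove, so later draws cannot overlap
        return picked

    return {
        "train": {"pos": draw(pos_pool, 50),
                  "unl_train_dist": draw(unl_tr_pool, 3000),
                  "unl_test_dist": draw(unl_te_pool, 3000)},
        "valid": {"pos": draw(pos_pool, 20),
                  "unl_train_dist": draw(unl_tr_pool, 250),
                  "unl_test_dist": draw(unl_te_pool, 250)},
        "test": {"eval": draw(unl_te_pool, 3000)},
    }
```

Removing drawn indices from each pool before the next draw enforces the quoted "did not overlap" property by construction.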
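The hyperparameters quoted in the Experiment Setup row can be collected into one sketch. The dict layout, variable names, and the epoch-to-step conversion are my assumptions (the report does not define what one epoch sweeps); only the numeric values are quoted.

```python
import math

# Hyperparameters quoted in the report; the dict and key names are mine.
HPARAMS = {
    "alpha": 0.5,       # relative parameter alpha (proposed method and CPU)
    "batch_size": 512,  # mini-batch size M
    "pos_batch": 50,    # positive mini-batch size P
    "lr": 1e-4,         # Adam learning rate (10^-4)
    "max_epochs": 200,  # maximum number of epochs
}

def max_optimizer_steps(n_unlabeled=3000, hp=HPARAMS):
    """Upper bound on Adam steps, assuming one epoch sweeps the 3,000
    training-distribution unlabeled examples in batches of M. This epoch
    definition is an assumption, not stated in the report."""
    return hp["max_epochs"] * math.ceil(n_unlabeled / hp["batch_size"])
```

Under that assumption the schedule caps out at 200 × ⌈3000/512⌉ = 1,200 optimizer steps per run.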