Fast Multi-Instance Partial-Label Learning

Authors: Yin-Fang Yang, Wei Tang, Min-Ling Zhang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments show that the performance of FASTMIPL is highly competitive with state-of-the-art methods, while significantly reducing computational time on benchmark and real-world datasets.
Researcher Affiliation Academia Yin-Fang Yang1,2, Wei Tang1,2*, Min-Ling Zhang1,2; 1School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; 2Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China; EMAIL, EMAIL, EMAIL
Pseudocode Yes The pseudocode of FASTMIPL's optimization procedure is summarized in the Appendix.
Open Source Code Yes FASTMIPL's code and appendix have been made publicly available on GitHub: https://github.com/yangyf22/FastMIPL
Open Datasets Yes Table 1 provides an overview of the characteristics of all datasets. There are eight types of characteristics mentioned. The symbol #bag denotes the count of multi-instance bags... Datasets: MNIST-MIPL (MNIST), FMNIST-MIPL (FMNIST), Birdsong-MIPL (Birdsong), SIVAL-MIPL (SIVAL), CRC-MIPL-Row (C-Row), CRC-MIPL-SBN (C-SBN), CRC-MIPL-KMeansSeg (C-KMeans), CRC-MIPL-SIFT (C-SIFT)
Dataset Splits Yes The data partition follows the strategies of DEMIPL and ELIMIPL, dividing the data into training and testing sets at a ratio of 7:3. The mean and standard deviation of accuracy are recorded over ten runs with random train/test splits.
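The split protocol described above (bag-level 7:3 partition, ten repetitions with different random splits, mean and standard deviation of accuracy) can be sketched as follows; `evaluate` is a hypothetical stand-in for one train/test run of FASTMIPL, not an API from the released code.

```python
import random
import statistics

def split_bags(n_bags, train_ratio=0.7, seed=0):
    """Randomly split bag indices into train/test at the given ratio
    (7:3 here, following the DEMIPL/ELIMIPL protocol). Varying `seed`
    yields a different random split per repetition."""
    rng = random.Random(seed)
    idx = list(range(n_bags))
    rng.shuffle(idx)
    cut = int(n_bags * train_ratio)
    return idx[:cut], idx[cut:]

def mean_std_over_splits(evaluate, n_bags, repeats=10):
    """Run `evaluate(train_idx, test_idx)` (a hypothetical one-shot
    train-and-test routine returning accuracy) over `repeats` random
    splits and report the mean and standard deviation of accuracy."""
    accs = [evaluate(*split_bags(n_bags, seed=r)) for r in range(repeats)]
    return statistics.mean(accs), statistics.stdev(accs)
```

Note that the split is over bags (the basic unit in multi-instance learning), not over individual instances.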
Hardware Specification Yes FASTMIPL is implemented using PyTorch and trained on a single NVIDIA GeForce RTX 4090 GPU. All experiments are performed on a machine with an Intel Core i7-13700K CPU, 64 GB main memory, and a single NVIDIA GeForce RTX 4090 GPU.
Software Dependencies No FASTMIPL is implemented using PyTorch and trained on a single NVIDIA GeForce RTX 4090 GPU.
Experiment Setup Yes The optimization process employs stochastic gradient descent (SGD) with a momentum of 0.9 and a weight decay of 0.0001... The learning rate is selected from the predefined set {0.0005, 0.001, 0.002, 0.005}, the training batch size equals the number of bags in the training set, and the number of posterior samples used to approximate the expectation is chosen from the set {10, 20, 30, 40, 50}. The number of epochs is set to 200 for the MNIST-MIPL and FMNIST-MIPL datasets and 500 for the remaining three datasets.
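The optimizer settings quoted above (momentum 0.9, weight decay 0.0001, learning rate from {0.0005, 0.001, 0.002, 0.005}) correspond to a standard SGD update with a momentum buffer and classic (non-decoupled) L2 weight decay. A minimal pure-Python sketch of one such update step, assuming the PyTorch convention of folding weight decay into the gradient before updating the momentum buffer:

```python
def sgd_momentum_step(w, grad, velocity, lr=0.001, momentum=0.9,
                      weight_decay=1e-4):
    """One SGD update with momentum and L2 weight decay, mirroring the
    reported hyperparameters. `w`, `grad`, `velocity` are parallel lists
    of floats (a stand-in for model parameters, their gradients, and the
    per-parameter momentum buffer)."""
    new_w, new_v = [], []
    for wi, gi, vi in zip(w, grad, velocity):
        g = gi + weight_decay * wi   # classic L2 weight decay folded into the gradient
        v = momentum * vi + g        # momentum buffer update (PyTorch convention)
        new_w.append(wi - lr * v)    # parameter step with the selected learning rate
        new_v.append(v)
    return new_w, new_v
```

In practice this is what `torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=1e-4)` computes; the sketch only makes the update rule explicit.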