Meta Label Correction with Generalization Regularizer
Authors: Tao Tong, Yujie Mo, Yucheng Xie, Songyue Cai, Xiaoshuang Shi, Xiaofeng Zhu
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on real datasets verify the effectiveness of our proposed method in terms of different classification tasks. ... In this section, we conduct experiments on two synthetic datasets and one real-world dataset with different noise ratios, comparing our proposed MLCGR with seven comparison methods on the image classification task in terms of accuracy (ACC). |
| Researcher Affiliation | Academia | Tao Tong, Yujie Mo, Yuchen Xie, Songyue Cai, Xiaoshuang Shi, Xiaofeng Zhu. School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China. EMAIL, EMAIL |
| Pseudocode | Yes | We list the pseudocode of the proposed MLCGR in Algorithm 1. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the described methodology or a link to a code repository. It only mentions obtaining code for comparison methods. |
| Open Datasets | Yes | We conduct experiments on two synthetic datasets (i.e., CIFAR10 and CIFAR100 [Krizhevsky et al., 2009]) and one real-world dataset (i.e., Clothing1M [Xiao et al., 2015]). |
| Dataset Splits | No | The paper mentions corrupting original labels for CIFAR10 and CIFAR100 and using batches, but does not explicitly provide specific percentages, sample counts, or detailed methodology for training/test/validation splits. It references a previous work for label corruption but not for the dataset split strategy. |
| Hardware Specification | Yes | We conduct all experiments on a computer with an Intel Core(TM) i9-12900K CPU @ 3.2 GHz (16 cores) and an NVIDIA GeForce RTX 3090, and implement all methods, including the proposed MLCGR, with the PyTorch framework. |
| Software Dependencies | No | The paper mentions using the PyTorch framework but does not specify a version number for it or any other software dependencies. |
| Experiment Setup | Yes | In the proposed MLCGR, we apply ReLU as the activation function and SGD as the optimizer with a momentum of 0.9. We set the learning rate to 10^-2 and gradually decay it to 10^-5. For noisy filtering, we set the parameters λs and T to 0.3 and 0.1, respectively. For meta label correction, we set the label-smoothing parameter λy to 0.2, the noisy-label parameter λny to 0.1, and the generalization-regularizer parameter γ to 0.1. The proposed MLCGR employs ResNet-32 as the backbone to extract sample representations for symmetric noise and ResNet-28-10 [Zagoruyko, 2016] for asymmetric noise. For the real-world dataset Clothing1M, we follow the setting of previous work [Tanaka et al., 2018] and use ResNet-50 pre-trained on ImageNet as the backbone. Moreover, we set the number of warm-up epochs for CIFAR10, CIFAR100, and Clothing1M to 10, 20, and 50, respectively. |
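The hyperparameters quoted in the Experiment Setup row can be collected into a minimal configuration sketch. Everything in `CONFIG` below is taken from the table; the exponential form of the learning-rate decay and the `lr_at_epoch` helper are assumptions for illustration, since the paper only states that the rate decays from 10^-2 to 10^-5 without naming a schedule.

```python
# Configuration sketch of the MLCGR setup quoted above.
# All CONFIG values come from the paper's stated settings; the decay
# schedule below is an assumed exponential interpolation, not the
# authors' confirmed implementation.
CONFIG = {
    "optimizer": "SGD",
    "momentum": 0.9,
    "lr_start": 1e-2,            # initial learning rate
    "lr_end": 1e-5,              # final learning rate after decay
    "lambda_s": 0.3,             # noisy-filtering parameter λs
    "T": 0.1,                    # noisy-filtering parameter T
    "lambda_y": 0.2,             # label-smoothing parameter λy
    "lambda_ny": 0.1,            # noisy-label parameter λny
    "gamma": 0.1,                # generalization-regularizer weight γ
    "warmup_epochs": {"CIFAR10": 10, "CIFAR100": 20, "Clothing1M": 50},
}


def lr_at_epoch(epoch: int, total_epochs: int) -> float:
    """Interpolate the learning rate exponentially from lr_start to lr_end.

    Hypothetical helper: the paper says the rate "gradually decays"
    but does not specify the schedule.
    """
    t = epoch / max(total_epochs - 1, 1)  # progress in [0, 1]
    return CONFIG["lr_start"] * (CONFIG["lr_end"] / CONFIG["lr_start"]) ** t
```

Under this assumed schedule, the rate starts at 10^-2 at epoch 0 and reaches exactly 10^-5 at the final epoch, shrinking by a constant factor per epoch.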