Label Noise Correction via Fuzzy Learning Machine
Authors: Jiye Liang, Yixiao Li, Junbiao Cui
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments were conducted on three commonly used datasets, including MNIST, CIFAR-10, SVHN. The results of the six metrics of the noise recognition experiments of each method under four noise ratios are shown in Table 2. |
| Researcher Affiliation | Academia | Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan, 030006, Shanxi, China EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology narratively and through equations in the 'Methodology' section. Figure 1, titled 'The overall design of FLM-LNC', visually illustrates the framework, but no explicit pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper does not contain any explicit statements about the release of open-source code, nor does it provide a link to a code repository or mention code in supplementary materials. |
| Open Datasets | Yes | The experiments were conducted on three commonly used datasets, including MNIST, CIFAR-10, SVHN. Among them, the MNIST dataset (Lecun et al. 1998) is a classic handwritten digit dataset widely used in machine learning and pattern recognition. The CIFAR-10 dataset (Krizhevsky, Hinton et al. 2009) is a well-known image dataset extensively used in the field of computer vision. The SVHN (Street View House Numbers) dataset (Netzer et al. 2011) is a publicly available large-scale dataset for digit recognition. |
| Dataset Splits | Yes | To validate the filtering performance of the proposed method, we first added completely random noise at proportions of 5%, 10%, 15%, and 20% to each training set. Subsequently, we applied the proposed filtering method as well as mainstream filtering methods to remove label noise and compared various noise filtering metrics. Finally, we trained classifiers using the denoised training sets and evaluated their generalization performance on the test sets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper mentions using a 7-layer convolutional neural network for MNIST and ResNet-18 for CIFAR-10 and SVHN, and the Adam optimizer. However, it does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The fuzzy parameters were set to α = 0.2 and β = 0.8. The model was optimized using the Adam optimizer with a learning rate of 0.001. The threshold γ was set to 0.7, and the batch-size was set to 2048. Iterative training continued until there was no significant change in the loss value for 10 consecutive epochs. |
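The noise-injection protocol and the reported hyperparameters can be collected into a minimal Python sketch. Only the values quoted above (α = 0.2, β = 0.8, γ = 0.7, learning rate 0.001, batch size 2048, 10-epoch patience, and the 5%/10%/15%/20% noise ratios) come from the paper; the `add_symmetric_label_noise` helper and the `CONFIG` layout are illustrative assumptions, not the authors' code (which is not released).

```python
import random

# Hyperparameters as reported in the paper's experiment setup
# (dict layout and key names are our own).
CONFIG = {
    "alpha": 0.2,       # fuzzy parameter α
    "beta": 0.8,        # fuzzy parameter β
    "gamma": 0.7,       # filtering threshold γ
    "lr": 1e-3,         # Adam learning rate
    "batch_size": 2048,
    "patience": 10,     # stop after 10 epochs with no significant loss change
    "noise_ratios": [0.05, 0.10, 0.15, 0.20],
}


def add_symmetric_label_noise(labels, noise_ratio, num_classes, seed=0):
    """Flip a `noise_ratio` fraction of labels, chosen uniformly at random,
    to a *different* class — mimicking the "completely random noise"
    injected into each training set in the paper's protocol."""
    rng = random.Random(seed)
    labels = list(labels)
    n_noisy = int(round(noise_ratio * len(labels)))
    for i in rng.sample(range(len(labels)), n_noisy):
        # Pick any class other than the current (clean) one.
        wrong = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(wrong)
    return labels
```

For example, injecting 10% noise into 1,000 ten-class labels corrupts exactly 100 of them, since every flipped label is guaranteed to differ from the original. The denoising methods are then evaluated on how well they recover which of those labels were corrupted.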