Robust Logit Adjustment for Learning with Long-Tailed Noisy Data
Authors: MingCai Chen, Yuntao Du, Wenyu Jiang, Baoming Zhang, Shuai Feng, Yi Xin, Chongjun Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both synthetic and real-world long-tailed noisy datasets demonstrate the superior performance of our method. We conducted extensive experiments with various noise and imbalance rates. Our method demonstrated significant improvements, achieving up to a 13% accuracy improvement on the noisy long-tailed CIFAR dataset and up to a 1.6% accuracy improvement on real-world noisy datasets with class imbalance, namely long-tailed Animal-10N and Food-101N. We also conduct systematic ablation analysis that leads to an improved understanding of our approach. |
| Researcher Affiliation | Academia | 1. Nanjing University of Posts and Telecommunications; 2. The State Key Laboratory of Tibetan Intelligence; 3. State Key Laboratory of General Artificial Intelligence, BIGA; 4. State Key Laboratory for Novel Software Technology, Nanjing University |
| Pseudocode | No | The paper describes the method in detail using prose and mathematical equations but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The text states: "We refer to our source code for detailed implementation." This statement is ambiguous and does not constitute a clear, affirmative release of the code to the public, nor does it provide a specific link or indicate its inclusion in supplementary materials. |
| Open Datasets | Yes | Our experiments are conducted on two types of datasets: one is obtained by corrupting the labels and creating class imbalance on the correctly labeled CIFAR dataset (Krizhevsky, Hinton et al. 2009); the other is obtained by creating class imbalance on the real-world noisy datasets Animal-10N (Song, Kim, and Lee 2019) and Food-101N (Lee et al. 2018). |
| Dataset Splits | Yes | To construct the imbalanced noisy dataset, we follow (Yi et al. 2022; Karthik, Revaud, and Chidlovskii 2021; Cao et al. 2021) and decide the number of samples per class according to the exponential function: Nc = Nmax * (1 - η)^((c-1)/(C-1)) , c ∈ {1, ..., C}, where η is the imbalance ratio, Nmax is the number of the samples from the majority class. [...] It is important to note that the test datasets are class-balanced (For example, there are 10,000 test images evenly distributed across the 10 classes on CIFAR-10). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions various techniques and frameworks (e.g., Dropout, Mixup, Rand Augment) but does not provide specific version numbers for any software libraries, programming languages, or solvers used in the implementation. |
| Experiment Setup | No | Additional training details are provided in the supplementary material. |
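The exponential imbalance profile quoted in the Dataset Splits row can be sketched in a few lines. This is an illustrative reading of the quoted formula Nc = Nmax * (1 - η)^((c-1)/(C-1)), not the authors' released code: the function and argument names are hypothetical, and it assumes η lies in [0, 1) so that the base (1 - η) is a positive decay factor.

```python
def class_sizes(n_max: int, num_classes: int, eta: float) -> list[int]:
    """Per-class sample counts under the exponential imbalance profile
    N_c = N_max * (1 - eta)^((c - 1) / (C - 1)), c in {1, ..., C}.

    Assumes 0 <= eta < 1, as the quoted formula implies; names are
    illustrative, not taken from the paper's implementation.
    """
    return [
        round(n_max * (1.0 - eta) ** ((c - 1) / (num_classes - 1)))
        for c in range(1, num_classes + 1)
    ]

# For CIFAR-10-like data with N_max = 5000 and eta = 0.9, the counts
# decay geometrically from 5000 (majority class) to 500 (minority class).
print(class_sizes(5000, 10, 0.9))
```

Note that the test set described in the same row stays balanced; only the training-set counts follow this profile.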