Twofold Debiasing Enhances Fine-Grained Learning with Coarse Labels
Authors: Xin-yang Zhao, Jian Jin, Yang-yang Li, Yazhou Yao
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on five benchmark datasets demonstrate the efficacy of our approach, achieving state-of-the-art results that surpass competitive methods. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China |
| Pseudocode | No | The paper describes the methodology in narrative text and does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/Faithzh/TFB |
| Open Datasets | Yes | Our experiments were performed on BREEDS (Santurkar, Tsipras, and Madry 2020) and CIFAR-100 (Krizhevsky 2009) datasets. |
| Dataset Splits | Yes | Following common experimental setups (Tian et al. 2020), we present results for both 5-way 1-shot and all-way 1-shot configurations, with 15 queries per test episode. The evaluation is conducted on 1000 random episodes, and we report the mean accuracy along with a 95% confidence interval. |
| Hardware Specification | Yes | The model is trained using the SGD optimizer on 4 GeForce RTX 3090 GPUs for 200 epochs. |
| Software Dependencies | No | The paper mentions backbone networks like ResNet-12 and ResNet-50 and the SGD optimizer, but does not provide specific version numbers for software dependencies like PyTorch, TensorFlow, or Python. |
| Experiment Setup | Yes | For CIFAR-100 and BREEDS, the batch sizes are 512 and 256 respectively; the initial learning rates are 0.12 and 0.03; the hyperparameters α are set at 1 and 10 respectively. The learning rates decrease by tenfold at the 140th and 180th epochs. We implement random data augmentation techniques including random resized crop, random horizontal flipping, and random color jitter during training. Other hyperparameter settings remain the same as those used in ANCOR. |
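The per-dataset settings and step-decay schedule quoted in the setup row can be sketched in plain Python. The batch sizes, learning rates, α values, and decay epochs come directly from the table above; the function and dictionary names are illustrative, not from the paper's code.

```python
def learning_rate(epoch, initial_lr, milestones=(140, 180), gamma=0.1):
    """Step-decay schedule: the LR drops tenfold at each milestone epoch."""
    lr = initial_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Per-dataset hyperparameters reported in the table above.
configs = {
    "CIFAR-100": {"batch_size": 512, "initial_lr": 0.12, "alpha": 1},
    "BREEDS":    {"batch_size": 256, "initial_lr": 0.03, "alpha": 10},
}
```

For example, `learning_rate(150, configs["CIFAR-100"]["initial_lr"])` yields 0.012, matching the tenfold decrease at the 140th epoch.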