Enhancing Robustness to Class-Conditional Distribution Shift in Long-Tailed Recognition
Authors: Keliang Li, Hong Chang, Shiguang Shan, Xilin Chen
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first conduct an empirical study to quantify the impact that CCD shift has on long-tailed recognition. Extensive experiments on long-tailed classification benchmarks demonstrate that DRA can cooperate with and enhance existing re-balancing and data augmentation methods. It also alleviates the recently discovered saddle-point issue, verifying its ability to achieve enhanced robustness. |
| Researcher Affiliation | Academia | Keliang Li, Hong Chang, Shiguang Shan, Xilin Chen: Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences |
| Pseudocode | Yes | Algorithm 1 DRA: augmenting with a sequence of examples Algorithm 2 WRM (Sinha et al., 2017): augmenting with the last example |
| Open Source Code | No | The paper mentions using the 'official code2 of (Rangwani et al., 2022a)' to estimate λmin and λmax, with a footnote providing a GitHub link for that *third-party* code. However, no statement or link is provided for the authors' *own* implementation of the method described in this paper. |
| Open Datasets | Yes | We conduct experiments on CIFAR10-LT, CIFAR100-LT, Tiny-ImageNet-LT (Cao et al., 2019), CelebA-5 (Kim et al., 2020) and ImageNet-LT (Liu et al., 2019). |
| Dataset Splits | Yes | We conduct experiments on CIFAR10-LT, CIFAR100-LT, Tiny-ImageNet-LT (Cao et al., 2019), CelebA-5 (Kim et al., 2020) and ImageNet-LT (Liu et al., 2019). The imbalance ratio of CIFAR-LT and Tiny-ImageNet-LT is set to 100. As in prior studies (Cao et al., 2019; Wei et al., 2022), we report the Top-1 accuracy on the test set for CelebA-5 and on the validation set for the other benchmarks. Results on ImageNet-LT are reported on three splits of classes: Many-shot (more than 100 training images), Medium-shot (20-100) and Few-shot (less than 20). |
| Hardware Specification | Yes | All the experiments are conducted on an Nvidia RTX 2080Ti GPU. |
| Software Dependencies | No | The paper mentions specific models (e.g., ResNet-32, ResNet-18, ResNet-50) and optimizers (SGD), but does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or versions of libraries such as PyHessian, which is mentioned only as a third-party tool). |
| Experiment Setup | Yes | Unless specifically stated, we use the setting from (Hong et al., 2021) on CIFAR10/100-LT by default. Under this setting, we apply SGD with batch size 256 and base learning rate 0.2 to train a ResNet-32 (He et al., 2016) model for 200 epochs, following (Hong et al., 2021). We employ a linear warm-up learning rate schedule for the first five epochs, reduce the learning rate at epochs 160 and 180 by a factor of 0.01, and use the same weight decay of 0.0005 as previous works (Cao et al., 2019; Zhou et al., 2022). |
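The ImageNet-LT evaluation splits quoted in the Dataset Splits row follow a simple per-class bucketing rule. A minimal sketch of that rule (the function name is ours, not from the paper):

```python
def shot_bucket(n_train_images: int) -> str:
    """Assign a class to an ImageNet-LT evaluation split by its number
    of training images, per the thresholds quoted above:
    Many-shot (>100), Medium-shot (20-100), Few-shot (<20)."""
    if n_train_images > 100:
        return "many"
    if n_train_images >= 20:
        return "medium"
    return "few"
```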
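The Experiment Setup row describes the learning rate schedule only qualitatively. One plausible reading, sketched below, is a linear warm-up over the first five epochs followed by step decay by a factor of 0.01 at epochs 160 and 180; the helper name and the exact warm-up shape are our assumptions:

```python
def learning_rate(epoch: int,
                  base_lr: float = 0.2,
                  warmup_epochs: int = 5,
                  milestones: tuple = (160, 180),
                  gamma: float = 0.01) -> float:
    """Per-epoch learning rate for the quoted recipe: linear warm-up
    for the first five epochs, then multiply by `gamma` at each
    milestone epoch (160 and 180)."""
    if epoch < warmup_epochs:
        # Linear warm-up from base_lr / warmup_epochs up to base_lr.
        return base_lr * (epoch + 1) / warmup_epochs
    # Apply the decay factor once for each milestone already passed.
    decay = gamma ** sum(epoch >= m for m in milestones)
    return base_lr * decay
```

Plugging this into a PyTorch run would amount to wrapping it in `torch.optim.lr_scheduler.LambdaLR`, but the sketch itself is framework-free.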