Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Enhance the Visual Representation via Discrete Adversarial Training
Authors: Xiaofeng Mao, Yuefeng Chen, Ranjie Duan, Yao Zhu, Gege Qi, Shaokai Ye, Xiaodan Li, Rong Zhang, Hui Xue
NeurIPS 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment Discrete Adversarial Training (DAT) on multiple tasks including image classification, object detection and self-supervised learning. |
| Researcher Affiliation | Collaboration | Alibaba Group, Zhejiang University, EPFL |
| Pseudocode | Yes | Algorithm 1: Pseudo code of DAT |
| Open Source Code | Yes | The code will be available at https://github.com/alibaba/easyrobust. |
| Open Datasets | Yes | We adopt ImageNet-1K for both training and in-distribution testing. |
| Dataset Splits | Yes | We study this effect by sampling 1000 mini-batches in the ImageNet validation set |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | We implement DAT with vanilla training recipes using "robustness" library. |
| Experiment Setup | Yes | We set = 0.1 by default in DAT. |
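The table above references the paper's Algorithm 1 (DAT). As a rough illustration only, the sketch below shows the core idea in a toy setting: craft an adversarial perturbation, project it onto a discrete codebook, and train on the discretized result. Note that the actual method perturbs VQGAN visual tokens of images, not raw features; the logistic-regression setup, `quantize` helper, and the 16-level codebook here are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def quantize(x, codebook):
    # Project each feature onto its nearest codebook value -- a toy
    # stand-in for DAT's discrete (VQGAN-based) image space; the real
    # method perturbs visual tokens, not raw inputs.
    idx = np.abs(x[..., None] - codebook).argmin(-1)
    return codebook[idx]

# Toy separable binary-classification data (illustrative only)
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

codebook = np.linspace(-3, 3, 16)  # hypothetical discrete levels
w = np.zeros(8)
lr, eps = 0.1, 0.1                 # eps = 0.1 echoes the paper's default, nothing more

for _ in range(200):
    # FGSM-style step in input space, as a simple substitute for the
    # paper's adversarial attack on discrete visual words
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = quantize(X + eps * np.sign(grad_x), codebook)
    # Train on the discretized adversarial batch
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"clean accuracy after DAT-style training: {acc:.2f}")
```

The key design point the sketch tries to convey is that the adversarial example is forced back into a discrete space before it is used for training, which is what distinguishes DAT from continuous pixel-space adversarial training.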