Beyond Boundaries: A Novel Data-Augmentation Discourse for Open Domain Generalization
Authors: Shirsha Bose, Ankit Jha, Hitesh Kandala, Biplab Banerjee
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on six benchmark datasets convincingly demonstrate that ODG-NET surpasses the state-of-the-art by an impressive margin of 1–4% in both open and closed-set DG scenarios. |
| Researcher Affiliation | Academia | Shirsha Bose (EMAIL), Technical University of Munich; Ankit Jha (EMAIL), Indian Institute of Technology Bombay; Hitesh Kandala (EMAIL), Indian Institute of Technology Bombay; Biplab Banerjee (EMAIL), Indian Institute of Technology Bombay |
| Pseudocode | Yes | Algorithm 1 ODG-NET training algorithm |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository. It only discusses the methodology. |
| Open Datasets | Yes | We present our results on six widely used benchmark datasets for DG. Specifically, we follow the approach of Shu et al. (2021) and use the following datasets: (1) Office-Home (Venkateswara et al., 2017), (2) PACS (Li et al., 2017), (3) Multi-Dataset (Shu et al., 2021). In addition, we introduce the experimental setup of ODG for two additional DG datasets, namely VLCS (Fang et al., 2013) and Digits-DG (Zhou et al., 2020b), in this paper. For our closed-set DG experiment, we also utilize the large-scale DomainNet (Peng et al., 2019). |
| Dataset Splits | Yes | We follow a cross-validation approach to estimate the loss weights, holding out 10% of samples per domain, and select the best-performing model using held-out pseudo-open-set validation samples, unseen during training, obtained through CuMix (Mancini et al., 2020b). |
| Hardware Specification | No | The paper mentions that ODG-NET comprises 48 million parameters for S = 3, and the training stage requires 65 GFLOPS, but does not provide specific hardware details like GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer (Kingma & Ba, 2014) but does not provide specific version numbers for any software, libraries, or frameworks used in the implementation. |
| Experiment Setup | Yes | We employ a standardized training protocol across all datasets. During each training iteration, we first optimize Eq. 8 using the Adam optimizer (Kingma & Ba, 2014), with a learning rate of 2e-4 and betas of (0.5, 0.99). We then minimize Eq. 9 using Adam with a learning rate of 2e-2 and betas of (0.9, 0.99). Our batch size is typically set to 64, and we train for 30 epochs, except for DomainNet, where we use a batch size of 128 and train for 40 epochs. |
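The optimizer settings quoted in the Experiment Setup row can be sketched as a single Adam update step (Kingma & Ba, 2014) in plain Python. This is a minimal illustration of the first-phase hyperparameters reported in the paper (lr=2e-4, betas=(0.5, 0.99)); the function and state names are ours, not from the authors' code, which is not released.

```python
import math

def adam_step(param, grad, state, lr=2e-4, betas=(0.5, 0.99), eps=1e-8):
    """One Adam update for a scalar parameter.

    Defaults mirror the first optimization phase quoted in the table:
    learning rate 2e-4, betas (0.5, 0.99). The second phase would use
    lr=2e-2, betas=(0.9, 0.99) instead.
    """
    b1, b2 = betas
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad       # first-moment estimate
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2  # second-moment estimate
    m_hat = state["m"] / (1 - b1 ** state["t"])          # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return param - lr * m_hat / (math.sqrt(v_hat) + eps)

# Illustrative scalar example: on the very first step, the update
# magnitude is approximately the learning rate.
state = {"t": 0, "m": 0.0, "v": 0.0}
p = adam_step(1.0, 0.5, state)
```

A low first beta such as 0.5 (versus the common 0.9) is typical for GAN-style generators, which is consistent with ODG-NET's generative augmentation stage.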