Domain-Generalizable Multiple-Domain Clustering
Authors: Amit Rozner, Barak Battash, Lior Wolf, Ofir Lindenbaum
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate empirically that our model is more accurate than baselines that require fine-tuning using samples from the target domain or some level of supervision. Our code is available at https://github.com/AmitRozner/domain-generalizable-multiple-domain-clustering. ... Experiments are conducted using four datasets commonly used to evaluate domain generalization methods. ... Ablation study We conducted two ablation studies to evaluate our model's performance. |
| Researcher Affiliation | Academia | Amit Rozner, Faculty of Engineering, Bar-Ilan University (equal contribution); Barak Battash, Faculty of Engineering, Bar-Ilan University (equal contribution); Lior Wolf, School of Computer Science, Tel Aviv University; Ofir Lindenbaum, Faculty of Engineering, Bar-Ilan University |
| Pseudocode | No | The paper describes the method's steps and logic within the main text and through figures illustrating the architecture and process, but there are no explicitly labeled 'Pseudocode' or 'Algorithm' blocks or sections. |
| Open Source Code | Yes | Our code is available at https://github.com/AmitRozner/domain-generalizable-multiple-domain-clustering. |
| Open Datasets | Yes | Experiments are conducted using four datasets commonly used to evaluate domain generalization methods. ... Office31 dataset (Saenko et al., 2010) ... PACS dataset (Li et al., 2017) ... OfficeHome dataset (Venkateswara et al., 2017) ... DomainNet dataset (Peng et al., 2019) |
| Dataset Splits | Yes | To evaluate the capabilities of our model, we focus on the following scheme: train the model using d unlabelled source domains, then evaluate our model on the unseen and unlabelled target domain. ... The notation X, Y → Z means the model was trained on the X, Y domains and tested on the Z domain. |
| Hardware Specification | Yes | Our work is implemented on an NVIDIA RTX 3080 GPU using PyTorch (Paszke et al., 2019). |
| Software Dependencies | No | Our work is implemented on an NVIDIA RTX 3080 GPU using PyTorch (Paszke et al., 2019). |
| Experiment Setup | Yes | Our work is implemented on an NVIDIA RTX 3080 GPU using PyTorch (Paszke et al., 2019). We use a blank ResNet-18 (He et al., 2016) as our feature extractor to align with the baselines (Menapace et al., 2020; Harary et al., 2022; Wang et al., 2022). The models in the first phase were trained using SGD with momentum 0.9 and weight decay 1e-4. We use a batch size of 8 and train the model for 500 epochs. To train the clustering head, we use the same optimizer with batches of size 256 for 100 epochs for the Office31 and OfficeHome datasets and 50 epochs for the PACS dataset. This difference is due to the small number of classes in the PACS dataset, which enables the model to converge much faster. To create style transfer augmentations, we use a pre-trained AdaIN model (Huang & Belongie, 2017). The most diversified head selection mechanism initiates at epoch 30 and is repeated every n = 10 epochs. |
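The optimizer settings quoted in the Experiment Setup row can be sketched in PyTorch. This is a minimal illustration, not the authors' code: the learning rate is not quoted above, so the value here is a placeholder, and a tiny stand-in module replaces the ResNet-18 feature extractor so the sketch runs without torchvision.

```python
import torch
import torch.nn as nn
from torch.optim import SGD

# Stand-in for the ResNet-18 feature extractor described in the paper
# (assumption: any nn.Module with parameters works for this sketch).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

# First-phase optimizer as quoted: SGD, momentum 0.9, weight decay 1e-4.
optimizer = SGD(
    feature_extractor.parameters(),
    lr=0.1,             # placeholder: the learning rate is not stated in the quote
    momentum=0.9,       # as stated in the paper
    weight_decay=1e-4,  # as stated in the paper
)

# One illustrative step with the first-phase batch size of 8.
x = torch.randn(8, 3, 32, 32)
loss = feature_extractor(x).pow(2).mean()
loss.backward()
optimizer.step()
```

The clustering-head phase reportedly reuses the same optimizer configuration with batch size 256, so only the data loader and epoch count would change between phases.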