Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Domain-invariant Feature Exploration for Domain Generalization

Authors: Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie

TMLR 2022 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We extensively evaluate our method in both visual and time-series domains including image classification, sensor-based human activity recognition, EMG recognition, and single-domain generalization. The training data are randomly split into two parts: 80% for training and 20% for validation. [...] The results on three image benchmarks are shown in Table 1, 2, and 3, respectively. [...] The results are shown in Table 4, 5, and 6 for three settings, respectively. Our method achieves the best average performance compared to the other state-of-the-art methods in all three settings. [...] Ablation Study We perform ablation study in this section.
Researcher Affiliation Collaboration Wang Lu (EMAIL) Institute of Computing Technology, Chinese Academy of Sciences; Jindong Wang (EMAIL) Microsoft Research Asia; Haoliang Li (EMAIL) City University of Hong Kong; Yiqiang Chen (EMAIL) Institute of Computing Technology, Chinese Academy of Sciences, and Peng Cheng Laboratory; Xing Xie (EMAIL) Microsoft Research Asia
Pseudocode No The paper describes the methodology using mathematical formulations (e.g., equations 1-7) and architectural diagrams (Figure 2, Figure 3) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Corresponding author. Code: https://github.com/jindongwang/transferlearning/tree/master/code/DeepDG.
Open Datasets Yes Digits DG (Zhou et al., 2020) contains four digit datasets: MNIST (LeCun et al., 1998), MNIST-M (Ganin & Lempitsky, 2015), SVHN (Netzer et al., 2011), SYN (LeCun et al., 1998). (2) PACS (Li et al., 2017) is an object classification benchmark with four domains [...]. (3) VLCS Fang et al. (2013) comprises photographic domains [...]. The four datasets are: UCI daily and sports dataset (DSADS) (Barshan & Yüksek, 2014) [...]. USC-HAD (Zhang & Sawchuk, 2012). [...]. UCI-HAR (Anguita et al., 2012). [...]. PAMAP2 (Reiss & Stricker, 2012). [...]. EMG for gestures Data Set Lobov et al. (2018)
Dataset Splits Yes The training data are randomly split into two parts: 80% for training and 20% for validation. The best model on the validation split is selected to evaluate the target domain. [...] For each dataset, we split data into four parts according to the persons and utilize 0, 1, 2, and 3 to denote four domains. [...] We choose DSADS since it contains sensors worn on five different positions. Therefore, we split DSADS into five domains according to sensor positions. [...] We randomly divide 36 subjects into four domains without overlapping and each domain contains data of 9 persons. [...] We randomly choose two subjects from USC-HAD and each subject serves as one domain.
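The 80%/20% train/validation protocol quoted above can be restated as a short sketch. This is not the authors' code (their implementation lives in the linked DeepDG repository); the function name, seed, and use of plain Python are illustrative assumptions.

```python
import random

def split_train_val(indices, train_frac=0.8, seed=0):
    """Randomly split sample indices into train and validation sets.

    Mirrors the quoted protocol: shuffle, take 80% for training and
    the remaining 20% for validation. `train_frac` and `seed` are
    illustrative parameters, not values from the paper.
    """
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

train_idx, val_idx = split_train_val(range(100))
print(len(train_idx), len(val_idx))  # 80 20
```

Per the quoted setup, the model with the best validation performance would then be selected for evaluation on the held-out target domain.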
Hardware Specification No The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications. It only mentions 'All methods are implemented with PyTorch (Paszke et al., 2019)'.
Software Dependencies No The paper mentions that 'All methods are implemented with PyTorch (Paszke et al., 2019)'. However, it does not specify a version number for PyTorch or any other software libraries or solvers.
Experiment Setup Yes The maximum training epoch is set to 120. The initial learning rate for Digits-DG is 0.01 while 0.005 for the other two datasets. The learning rate is decayed by 0.1 twice at the 70% and 90% of the max epoch respectively. [...] The maximum training epoch is set to 150. The Adam optimizer with weight decay 5 × 10⁻⁴ is used. The learning rate for the rest methods is 10⁻². We tune hyperparameters for each method.
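The step schedule quoted above (decay by 0.1 at 70% and 90% of the maximum epoch) can be sketched as a small function. The base learning rate 0.01 matches the quoted Digits-DG value; the function itself is an illustrative reconstruction, not the authors' implementation.

```python
def lr_at_epoch(epoch, max_epoch=120, base_lr=0.01, gamma=0.1):
    """Learning rate under the quoted step schedule.

    The rate is multiplied by `gamma` (0.1) once the epoch reaches
    70% of `max_epoch`, and again at 90%. Defaults follow the quoted
    Digits-DG setup (120 epochs, initial LR 0.01).
    """
    lr = base_lr
    if epoch >= int(0.7 * max_epoch):  # first decay at epoch 84
        lr *= gamma
    if epoch >= int(0.9 * max_epoch):  # second decay at epoch 108
        lr *= gamma
    return lr
```

For example, with the defaults the rate is 0.01 through epoch 83, 0.001 from epoch 84, and 0.0001 from epoch 108 onward; in PyTorch the same effect could be obtained with a milestone-based step scheduler.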