Open-Set Heterogeneous Domain Adaptation: Theoretical Analysis and Algorithm

Authors: Thai-Hoang Pham, Yuanlong Wang, Changchang Yin, Xueru Zhang, Ping Zhang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments across text, image, and clinical data demonstrate the effectiveness of our algorithm. ... We conduct experiments on real data from clinical, computer vision, and natural language processing domains to validate the effectiveness of our method for OSHeDA. ... We conduct our experiments on 7 datasets including CIFAR10 (Krizhevsky 2009) & ILSVRC2012 (Russakovsky et al. 2015)...
Researcher Affiliation | Academia | 1Department of Computer Science and Engineering, The Ohio State University, USA; 2Department of Biomedical Informatics, The Ohio State University, USA
Pseudocode | Yes | Figure 2 presents the overall architecture of RL-OSHeDA, while pseudocode describing the training process can be found in Appendix B.2.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository.
Open Datasets | Yes | We conduct our experiments on 7 datasets including CIFAR10 (Krizhevsky 2009) & ILSVRC2012 (Russakovsky et al. 2015); Wikipedia (Rasiwasia et al. 2010); Multilingual Reuters Collection (Amini, Usunier, and Goutte 2009); NUS-WIDE (Chua et al. 2009) & ImageNet (Deng et al. 2009); Office (Saenko et al. 2010) & Caltech256 (Griffin et al. 2007); ImageCLEF-DA; PTB-XL (Wagner et al. 2020).
Dataset Splits | No | The paper mentions several datasets used for experiments but does not explicitly provide training, validation, or test split details in the main text. It refers to Appendix C.1 for "Detailed descriptions and statistics of these datasets" but does not specify the splits in the main body.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to conduct the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) used for the experiments.
Experiment Setup | No | The paper describes the objective function and the 2-stage learning process, including the roles of L_cls, L_inv, L_seg, and L_osd and the pseudo-labeling strategy. It also states "Detailed architectures of our model and the baselines are in Appendix B.1." and notes that pseudocode for the training process is in Appendix B.2. However, it does not provide specific numerical hyperparameters such as learning rates, batch sizes, number of epochs, or optimizer settings in the main text.
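The Experiment Setup row refers to an objective built from four loss terms (L_cls, L_inv, L_seg, L_osd) and a confidence-based pseudo-labeling strategy. A minimal sketch of how such a multi-term objective and thresholded pseudo-labeling are typically combined is shown below; the weights, threshold, and function names are illustrative placeholders, not values or APIs from the paper (which keeps those details in its appendices):

```python
# Hedged sketch of a multi-term objective and confidence-thresholded
# pseudo-labeling, as described in the assessment above. All weights,
# the threshold, and the function names are hypothetical placeholders.

def total_loss(l_cls, l_inv, l_seg, l_osd,
               alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of the four loss terms; alpha/beta/gamma are
    illustrative trade-off weights, not values from the paper."""
    return l_cls + alpha * l_inv + beta * l_seg + gamma * l_osd

def pseudo_labels(probs, threshold=0.9):
    """Assign a pseudo-label only to samples whose maximum predicted
    class probability exceeds a confidence threshold; low-confidence
    samples get None. The paper's exact rule is in its Appendix B.2."""
    labels = []
    for p in probs:
        conf = max(p)
        labels.append(p.index(conf) if conf >= threshold else None)
    return labels

# Example: only the first sample is confident enough to be labeled.
print(pseudo_labels([[0.95, 0.05], [0.6, 0.4]]))  # [0, None]
```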