Indirect Alignment and Relationship Preservation for Domain Generalization

Authors: Wei Wei, Zixiong Li, Jing Yan, Mingwen Shao, Lin Li

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our framework consistently outperforms state-of-the-art methods."
Researcher Affiliation | Academia | 1. Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi, China; 2. Qingdao Institute of Software, College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, Shandong, China
Pseudocode | No | The paper describes the proposed method verbally and mathematically, including equations for its loss functions, but does not present any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that the source code for the described methodology will be released nor includes any links to code repositories.
Open Datasets | Yes | "To thoroughly evaluate the proposed method, we selected four challenging domain generalization datasets: PACS [Li et al., 2017], Office-Home [Venkateswara et al., 2017], VLCS [Fang et al., 2013], Terra Incognita [Beery et al., 2018]."
Dataset Splits | Yes | "A leave-one-domain strategy is adopted, where one domain serves as the target domain, and the remaining domains are used as source domains for training. For model validation and selection, 20% of the samples from each source domain are set aside as the validation set."
Hardware Specification | No | The paper mentions using the PyTorch framework and a pre-trained ResNet-50 model, but does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies | No | The paper states, "We use the PyTorch framework," but it does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | "The Adam optimizer is applied with a learning rate of 5e-5. Following the standard training and evaluation procedure of SWAD, we use a batch size of 32 for each domain. ... For the number of iterations, the Office-Home dataset is trained for 3,000 iterations, while the other datasets are trained for 5,000 iterations. The hyperparameters are set as follows: m1 = 0.1, m2 = 0.4, m3 = 0.1, and α = 0.01."
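Since the paper releases no code, the quoted experiment setup can be sketched in PyTorch as follows. This is a hypothetical reconstruction of the reported configuration only (optimizer, learning rate, per-domain batch size, loss hyperparameters); the backbone is a tiny placeholder standing in for the paper's pre-trained ResNet-50, and the loss is plain cross-entropy rather than the paper's full objective.

```python
import torch
from torch import nn, optim

# Hyperparameters quoted from the paper's experiment setup.
LEARNING_RATE = 5e-5                      # Adam learning rate
BATCH_SIZE = 32                           # per source domain (SWAD protocol)
ITERATIONS = 5000                         # 3,000 for Office-Home, 5,000 otherwise
M1, M2, M3, ALPHA = 0.1, 0.4, 0.1, 0.01   # m1, m2, m3, alpha

NUM_CLASSES = 7  # e.g. PACS has 7 classes

# Placeholder backbone: the paper uses a pre-trained ResNet-50
# (torchvision.models.resnet50); a small linear head on flattened
# inputs stands in here so the sketch runs without downloading weights.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, NUM_CLASSES))

optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()  # stand-in for the paper's full loss

# One illustrative training step per source domain on random data
# (three source domains under the leave-one-domain-out protocol).
for domain in range(3):
    x = torch.randn(BATCH_SIZE, 3, 32, 32)
    y = torch.randint(0, NUM_CLASSES, (BATCH_SIZE,))
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```

For a faithful reproduction one would also need the paper's alignment and relationship-preservation loss terms (weighted by m1, m2, m3, and alpha) and SWAD weight averaging, none of which are specified in code form in the paper.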