One-Step Generalization Ratio Guided Optimization for Domain Generalization

Authors: Sumin Cho, Dongwon Kim, Kwangsu Kim

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We empirically validated GENIE on five standard DG datasets (Li et al., 2017; Fang et al., 2013; Venkateswara et al., 2017; Beery et al., 2018; Peng et al., 2019), where it consistently outperformed established optimizers, even with extended iterations. Furthermore, using our optimizer in existing DG and Single-DG (SDG) algorithms enhances their performance. We summarize our contributions as follows: (1) We propose GENIE, a novel optimizer that addresses the overlooked issue of parameter imbalance in DG; it suppresses over-predictive parameters while promoting balanced parameter updates. (2) We incorporate OSGR, previously used as a generalization metric, into the optimizer's core principle, providing an efficient and novel perspective on generalization for addressing DG. (3) GENIE is a domain-agnostic optimizer, validated across multiple DG benchmarks and SDG tasks, demonstrating its broad applicability and scalability.
Researcher Affiliation Academia 1Department of Computer Science and Engineering, Sungkyunkwan University, Suwon, Korea. Correspondence to: Sumin Cho <EMAIL>, Dongwon Kim <EMAIL>, Kwangsu Kim <EMAIL>.
Pseudocode Yes Algorithm 1 Algorithm for GENIE
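The report above describes GENIE's core idea (suppressing over-predictive parameters by weighting updates with the One-Step Generalization Ratio) without reproducing Algorithm 1. The sketch below is a hypothetical illustration of that idea, not the authors' algorithm: it assumes OSGR can be approximated per parameter as the alignment between the training gradient and a held-out validation gradient, so that parameters whose training gradient overshoots the validation signal get a damped step.

```python
def genie_style_step(params, train_grads, val_grads, lr=0.1, eps=1e-8):
    """Hypothetical GENIE-style update (illustrative, not the paper's
    Algorithm 1): scale each parameter's step by a per-parameter OSGR
    proxy. A parameter whose train gradient is aligned with the
    validation gradient updates at nearly the full rate; an
    "over-predictive" parameter (train gradient large, validation
    gradient misaligned or zero) gets a suppressed update."""
    new_params = []
    for p, gt, gv in zip(params, train_grads, val_grads):
        # OSGR proxy in [0, 1]: fraction of the train-gradient step that
        # also points in the validation-gradient direction.
        ratio = max(0.0, min(1.0, (gt * gv) / (gt * gt + eps)))
        new_params.append(p - lr * ratio * gt)
    return new_params
```

With two parameters whose train gradients are (1.0, 2.0) but validation gradients are (1.0, 0.0), the first parameter receives an essentially full step while the second is frozen, matching the "suppress over-predictive parameters" intuition.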
Open Source Code No The paper does not provide an explicit statement about open-source code release or a link to a code repository in the main text or supplementary materials. Appendix D.2 contains a code snippet, but no statement of release.
Open Datasets Yes Our approach was evaluated on five DG benchmark datasets: PACS (Li et al., 2017), VLCS (Fang et al., 2013), OfficeHome (Venkateswara et al., 2017), Terra Incognita (Beery et al., 2018), and DomainNet (Peng et al., 2019).
Dataset Splits Yes We followed the standardized protocols of DomainBed (Gulrajani & Lopez-Paz, 2021), which include dataset splits, hyperparameter searches, and model selection using validation sets. [...] For all DG and SDG experiments, we employed the Training-domain Validation Set approach, partitioning the source domain into training and validation subsets. The optimal model was selected based on validation performance. We followed previous DG methods by constructing 20 train-validation splits, with each split repeated 3 times.
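The split protocol quoted above (per-domain train/validation partitions, 20 splits, each repeated 3 times) can be sketched as follows. This is a minimal illustration of the stated protocol, not DomainBed's actual implementation; the holdout fraction and dictionary layout are assumptions for the example.

```python
import random

def make_splits(domain_sizes, holdout_frac=0.2, n_splits=20, n_repeats=3, seed=0):
    """Sketch of a training-domain validation protocol: each source
    domain's examples are partitioned into train/validation index sets;
    n_splits random partitions are drawn, and each is repeated n_repeats
    times (20 x 3 = 60 runs under the quoted setup)."""
    rng = random.Random(seed)
    runs = []
    for s in range(n_splits):
        per_domain = []
        for n in domain_sizes:
            idx = list(range(n))
            rng.shuffle(idx)
            k = int(n * holdout_frac)  # held-out validation portion
            per_domain.append({"val": idx[:k], "train": idx[k:]})
        # The same partition is reused across repeats of this split.
        for r in range(n_repeats):
            runs.append({"split": s, "repeat": r, "domains": per_domain})
    return runs
```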
Hardware Specification Yes All experiments were conducted on an NVIDIA GeForce RTX 4090 under the environment of Python 3.8.10, PyTorch 1.13.1, Torchvision 0.14.1, and CUDA 11.7.
Software Dependencies Yes All experiments were conducted on an NVIDIA GeForce RTX 4090 under the environment of Python 3.8.10, PyTorch 1.13.1, Torchvision 0.14.1, and CUDA 11.7.
Experiment Setup Yes In accordance with DomainBed protocols, models were trained for 15,000 iterations on DomainNet and 5,000 iterations on the other datasets. For all DG and SDG experiments, we employed the Training-domain Validation Set approach, partitioning the source domain into training and validation subsets. The optimal model was selected based on validation performance. We used ResNet-50 (He et al., 2016b) pre-trained on ImageNet (He et al., 2016a) as the backbone architecture. Implementation details are presented in Appendix D. The search space of hyperparameters is provided in Table 8.
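The per-dataset setup quoted above can be captured as a small configuration helper. The key names below are illustrative, not the paper's exact configuration; only the iteration budgets, backbone, and model-selection rule come from the quoted text.

```python
def experiment_config(dataset):
    """Configuration sketch of the quoted DomainBed-style setup:
    15,000 training iterations for DomainNet, 5,000 for the other
    benchmarks; ImageNet-pretrained ResNet-50 backbone; model selection
    via a training-domain validation set."""
    return {
        "dataset": dataset,
        "backbone": "ResNet-50 (ImageNet pre-trained)",
        "iterations": 15000 if dataset == "DomainNet" else 5000,
        "model_selection": "training-domain validation set",
    }
```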