DGFamba: Learning Flow Factorized State Space for Visual Domain Generalization
Authors: Qi Bi, Jingjun Yi, Hao Zheng, Haolan Zhan, Wei Ji, Yawen Huang, Yuexiang Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on various visual domain generalization settings show its state-of-the-art performance. The paper includes sections such as 'Experiments', 'Datasets & Evaluation Protocols', 'Comparison with State-of-the-art', and 'Ablation Studies', which involve analyzing performance on datasets and presenting results in tables. |
| Researcher Affiliation | Collaboration | The authors list affiliations with '1Jarvis Research Center, Tencent YouTu Lab, Shenzhen, China' (Industry), '2Faculty of Information Technology, Monash University, Melbourne, Australia' (Academia), '3School of Medicine, Yale University, New Haven, United States' (Academia), and '4Faculty of Science and Technology, University of Macau, Macau' (Academia). The presence of both Tencent (industry) and multiple universities (academia) indicates a collaborative affiliation. |
| Pseudocode | No | The paper describes the methodology and procedures in paragraph form using mathematical equations and figures, but it does not contain any structured pseudocode or algorithm blocks labeled as such. |
| Open Source Code | No | The paper does not provide any explicit statements about the availability of source code for the described methodology, nor does it include links to a code repository. |
| Open Datasets | Yes | Our experiments are conducted on four visual domain generalization datasets. Specifically, PACS (Li et al. 2017), VLCS (Fang, Xu, and Rockmore 2013), Office Home (Venkateswara et al. 2017), Terra Incognita (Beery, Van Horn, and Perona 2018). |
| Dataset Splits | Yes | Following the evaluation protocols of existing methods (Gulrajani and Lopez-Paz 2020; Cha et al. 2021), experiments are conducted under the leave-one-domain-out protocol, where only one domain is used as the unseen target domain and the rest domains are used as the source domains for training. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as exact GPU or CPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions using VMamba as the backbone and the AdamW optimizer, but it does not specify version numbers for any software dependencies, libraries, or programming languages used. |
| Experiment Setup | Yes | The training terminates after 10000 iterations, with a batch size of 16 per source domain. The AdamW optimizer is used for optimization, with a momentum value of 0.9 and an initial learning rate of 3 × 10⁻⁴. In addition, the cosine decay learning rate scheduler is adopted. |
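The reported schedule (initial learning rate 3 × 10⁻⁴, cosine decay over 10000 iterations) can be sketched as a small standalone function. This is a minimal sketch, not the authors' implementation: the function name `cosine_decay_lr` and the assumed floor of 0 for the final learning rate are illustrative choices, since the paper does not state a minimum learning rate.

```python
import math

def cosine_decay_lr(step, total_steps=10000, base_lr=3e-4, min_lr=0.0):
    """Cosine decay from base_lr to min_lr over total_steps iterations.

    Matches the setup reported in the paper: initial learning rate 3e-4,
    training terminated after 10000 iterations. min_lr=0.0 is an assumption.
    """
    progress = min(step / total_steps, 1.0)
    # Standard half-cosine: 1.0 at progress=0, 0.0 at progress=1.
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Example: learning rate at the start, midpoint, and end of training.
print(cosine_decay_lr(0))      # 3e-4
print(cosine_decay_lr(5000))   # 1.5e-4
print(cosine_decay_lr(10000))  # 0.0
```

In a PyTorch training loop this schedule would typically be realized via `torch.optim.lr_scheduler.CosineAnnealingLR` wrapped around an `AdamW` optimizer with `betas=(0.9, ...)`, consistent with the momentum value of 0.9 reported above.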