Learning Fine-grained Domain Generalization via Hyperbolic State Space Hallucination
Authors: Qi Bi, Jingjun Yi, Haolan Zhan, Wei Ji, Gui-Song Xia
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three FGDG benchmarks demonstrate its state-of-the-art performance. |
| Researcher Affiliation | Academia | School of Artificial Intelligence, Wuhan University, Wuhan, China; Faculty of Information Technology, Monash University, Melbourne, Australia; School of Medicine, Yale University, New Haven, United States |
| Pseudocode | No | The paper describes the methodology using mathematical equations and block diagrams (Figure 3) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/BiQiWHU/HSSH |
| Open Datasets | Yes | CUB-200-2011 (Wah et al. 2011) and CUB-200-Paintings (CUB-P, denoted as P) (Wang et al. 2020); Million-AID (MAID, denoted as M) (Long et al. 2021) and NWPU-RESISC45 (NWPU, denoted as N) (Cheng, Han, and Lu 2017); Caltech-UCSD Birds-200-2011 (CUB-200-2011, denoted as C) (Wah et al. 2011), NABirds (denoted as N) (Van Horn et al. 2015), and iNaturalist2017 (iNat2017, denoted as I) (Van Horn et al. 2018). |
| Dataset Splits | No | The paper describes the datasets used and how categories were selected (e.g., "common fine-grained categories"), but it does not specify explicit training, validation, or test dataset splits (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify any software names with version numbers (e.g., programming languages, libraries, or frameworks with their versions). |
| Experiment Setup | Yes | For all experiments across the three FGDG settings, the Adam optimizer is employed with a learning rate of 1e-4 and momentum parameters (betas) of 0.9 and 0.99. Training spans 100 epochs. ... λ is set to 0.5. |
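The reported setup (Adam, learning rate 1e-4, momentum parameters 0.9 and 0.99) can be sketched as a standard Adam update. The paper does not specify a framework or model, so the function below is a framework-agnostic illustration of that configuration, not the authors' training code; `adam_step` and its parameter names are hypothetical.

```python
def adam_step(param, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.99, eps=1e-8):
    """One Adam update for a single scalar parameter.

    Defaults mirror the paper's reported configuration: lr = 1e-4,
    momentum parameters (betas) 0.9 and 0.99. `t` is the 1-based step count.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v
```

For example, a single step with a unit gradient moves the parameter by roughly the learning rate (1e-4), since the bias-corrected moments cancel on the first update.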