Few-Shot Fine-Grained Image Classification with Progressively Feature Refinement and Continuous Relationship Modeling
Authors: Zhen-Xiang Ma, Zhen-Duo Chen, Tai Zheng, Xin Luo, Zixia Jia, Xin-Shun Xu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted extensive experiments on five fine-grained benchmark datasets, and the experimental results demonstrate that the proposed method is comprehensively ahead of the existing State-of-the-Art methods. |
| Researcher Affiliation | Academia | Zhen-Xiang Ma, Zhen-Duo Chen*, Tai Zheng, Xin Luo, Zixia Jia, Xin-Shun Xu; School of Software, Shandong University, Jinan, China |
| Pseudocode | No | The paper describes methods in textual paragraphs and mathematical formulations, but it does not contain a clearly labeled section or figure for 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | The paper does not explicitly state that the authors are releasing their code or provide a link to a code repository. Table 2 mentions "indicates our implementation based on the public code" which refers to baseline implementations, not their own. |
| Open Datasets | Yes | We evaluated the proposed method on five fine-grained few-shot benchmark datasets: CUB-200-2011 (Wah et al. 2011), Stanford Dogs (Khosla et al. 2011), Stanford Cars (Krause et al. 2013), meta-iNat (Horn et al. 2018; Wertheimer and Hariharan 2019), and tiered meta-iNat (Wertheimer and Hariharan 2019). |
| Dataset Splits | Yes | We follow the dataset splitting way most commonly used by current State-of-the-Art methods (Ma et al. 2024; Zhu, Liu, and Jiang 2020), as detailed in Table 1. ... Episodic Meta-testing Details: We apply the standard 5-way 1-shot and 5-way 5-shot settings, with 15 query images per class. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the "SGD optimizer with Nesterov momentum" but does not specify any software libraries or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | During model training, we used the SGD optimizer with Nesterov momentum of 0.9. The initial learning rate is set to 0.1, and the weight decay is set to 5e-4. Episodic Meta-training Details: For both Conv-4 and ResNet-12, input images are resized to 84 × 84. We apply data augmentation techniques consistent with existing methods for all benchmark datasets, including random crop, horizontal flip, and color jitter. |
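The optimizer configuration quoted above (SGD, Nesterov momentum 0.9, learning rate 0.1, weight decay 5e-4) can be sketched as a single update step. This is a minimal illustration assuming the common PyTorch-style Nesterov rule with L2 weight decay folded into the gradient; the paper does not state which framework or exact update rule it uses, and the function name is ours.

```python
def sgd_nesterov_step(param, grad, buf, lr=0.1, momentum=0.9, weight_decay=5e-4):
    """One SGD step with Nesterov momentum, using the paper's stated
    hyperparameters as defaults (lr=0.1, momentum=0.9, wd=5e-4)."""
    g = grad + weight_decay * param   # L2 weight decay added to the gradient
    buf = momentum * buf + g          # update the velocity buffer
    update = g + momentum * buf       # Nesterov look-ahead correction
    return param - lr * update, buf

# Example: one step on a single scalar parameter.
p, v = 1.0, 0.0
p, v = sgd_nesterov_step(p, grad=1.0, buf=v)  # p is roughly 0.8099
```

In practice this would be applied per-tensor by the framework's optimizer; the sketch only makes the quoted hyperparameters concrete.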