Bidirectional Logits Tree: Pursuing Granularity Reconcilement in Fine-Grained Classification

Authors: Zhiguang Lu, Qianqian Xu, Shilong Bao, Zhiyong Yang, Qingming Huang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate the effectiveness of our proposed method." "Extensive experiments and visualizations justify the effectiveness of our method."
Researcher Affiliation | Academia | 1) Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences; 2) School of Computer Science and Technology, University of Chinese Academy of Sciences; 3) Key Laboratory of Big Data Mining and Knowledge Management, University of Chinese Academy of Sciences
Pseudocode | No | The paper describes its methodology in detail through mathematical formulas and descriptive text, but contains no explicitly labeled 'Pseudocode' or 'Algorithm' block, nor structured steps formatted like code.
Open Source Code | Yes | Code: https://github.com/ZhiguangLuu/BiLT
Open Datasets | Yes | "We evaluate methods on four datasets: FGVC-Aircraft (Maji et al. 2013), CIFAR-100 (Krizhevsky, Hinton et al. 2009), iNaturalist2019 (Van Horn et al. 2018) and tieredImageNet-H (Ren et al. 2018), all of which have been used in previous studies (Liang and Davis 2023; Garg, Sani, and Anand 2022; Bertinetto et al. 2020)."
Dataset Splits | Yes | Dataset statistics and split settings are provided in the Appendix.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types) used to run the experiments.
Software Dependencies | No | The paper does not give version numbers for the software dependencies or libraries used in its experimental setup. PyTorch is cited in the references, but no version is specified for its use in the experiments.
Experiment Setup | No | The 'Experiments Setup' section primarily discusses datasets and competitors. Although the paper includes a sensitivity analysis for its method-specific parameters (α, β, ϵ, γ), the main text gives no concrete hyperparameter values or training configuration, such as learning rate, batch size, or optimizer settings.