Class Semantic Attribute Perception Guided Zero-Shot Learning
Authors: Qin Yue, Junbiao Cui, Jianqing Liang, Liang Bai
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the effectiveness of the proposed method on zero-shot learning benchmark data sets. ... Extensive experiments on ZSL benchmark data sets validate the effectiveness of the proposed method compared to state-of-the-art methods. ... We conduct ablation experiments to demonstrate the effectiveness of different components in CSAP-ZSL. |
| Researcher Affiliation | Academia | Qin Yue, Junbiao Cui, Jianqing Liang*, Liang Bai* — Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan, 030006, Shanxi, China |
| Pseudocode | No | The paper describes the methodology using mathematical formulas and descriptive text, but it does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | In the experiments, three benchmark ZSL data sets AWA2 (Animals with Attributes2) (Xian et al. 2019a), CUB (Caltech-UCSD Birds-200-2011) (Welinder et al. 2010), and SUN (SUN Attribute) (Patterson and Hays 2012) are used for evaluating the performance of the proposed method. |
| Dataset Splits | Yes | The data sets and data split both follow the literature (Xian et al. 2019a). The detailed information of data sets is summarized in Table 1. ... Table 1: The basic information of three zero-shot classification benchmark data sets. ... Training Samples ... Test Samples |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for experiments. |
| Software Dependencies | No | The paper mentions the 'Adam (Kingma and Ba 2015) optimizer' and the 'vision transformer (Dosovitskiy et al. 2021)', but does not provide version numbers for any software libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | Specifically, the trade-off parameters α and β are both searched in the set {1e-3, 5e-4, 1e-4, 5e-5, 1e-5}. The number of clusters in graph cut K is set to 7, 8, and 8 for data sets AWA2, CUB, and SUN, respectively. The radius of the similarity graph of regions is set to 6, 5, and 10 for data sets AWA2, CUB, and SUN, respectively. The calibration factor γ is set to 0.9, 0.9, and 0.3 for data sets AWA2, CUB, and SUN, respectively. Finally, we adopt the Adam (Kingma and Ba 2015) optimizer in the experiments. The learning rates of the networks AEN, PGN, and APMN are all searched in the set {1e-2, 5e-3, 1e-3}. The learning rate of the vision transformer is set to 5e-6, 5e-5, and 1e-6 for data sets AWA2, CUB, and SUN, respectively. The batch size is set to 200 on all data sets. We take an image of size 3 × 224 × 224 as input. |
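The per-dataset and shared hyperparameters quoted above can be collected into a single configuration for reimplementation. The sketch below is a minimal summary, not the authors' code; the key names (`clusters_K`, `graph_radius`, etc.) are our own labels, while the values are taken directly from the paper's reported setup.

```python
# Hypothetical reproducibility config for CSAP-ZSL; values quoted from the
# paper's experiment-setup section, key names chosen here for illustration.
PER_DATASET = {
    "AWA2": {"clusters_K": 7, "graph_radius": 6, "calibration_gamma": 0.9, "vit_lr": 5e-6},
    "CUB":  {"clusters_K": 8, "graph_radius": 5, "calibration_gamma": 0.9, "vit_lr": 5e-5},
    "SUN":  {"clusters_K": 8, "graph_radius": 10, "calibration_gamma": 0.3, "vit_lr": 1e-6},
}

SHARED = {
    # Search grid for the trade-off parameters alpha and beta
    "tradeoff_grid": [1e-3, 5e-4, 1e-4, 5e-5, 1e-5],
    # Search grid for the learning rates of networks AEN, PGN, and APMN
    "network_lr_grid": [1e-2, 5e-3, 1e-3],
    "optimizer": "Adam",
    "batch_size": 200,
    "input_shape": (3, 224, 224),  # channels, height, width
}

def config_for(dataset: str) -> dict:
    """Merge the shared settings with the dataset-specific ones."""
    return {**SHARED, **PER_DATASET[dataset]}

if __name__ == "__main__":
    for name in PER_DATASET:
        print(name, config_for(name))
```

Note that the paper reports search *grids* for α, β, and the network learning rates rather than single chosen values, so any reimplementation would still need to run that search.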