GenZSL: Generative Zero-Shot Learning Via Inductive Variational Autoencoder

Authors: Shiming Chen, Dingjie Fu, Salman Khan, Fahad Shahbaz Khan

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on three popular benchmark datasets showcase the superiority and potential of our GenZSL with significant efficacy and efficiency over f-VAEGAN, e.g., 24.7% performance gains and more than ×60 faster training speed on AWA2.
Researcher Affiliation | Academia | 1 Mohamed bin Zayed University of Artificial Intelligence, 2 Huazhong University of Science and Technology, 3 Australian National University, 4 Linköping University.
Pseudocode | No | The paper describes the methods using textual explanations, equations, and a pipeline diagram (Figure 2), but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Codes are available at https://github.com/shiming-chen/GenZSL.
Open Datasets | Yes | We evaluate our GenZSL on three well-known ZSL benchmark datasets, i.e., two fine-grained datasets (CUB (Welinder et al., 2010) and SUN (Patterson & Hays, 2012)) and one coarse-grained dataset (AWA2 (Xian et al., 2019a)).
Dataset Splits | Yes | We use the training splits proposed in (Xian et al., 2018). Meanwhile, the visual features with 512 dimensions are extracted from the CLIP vision encoder (Radford et al., 2021). ... CUB has 11,788 images of 200 bird classes (seen/unseen classes = 150/50). SUN contains 14,340 images of 717 scene classes (seen/unseen classes = 645/72). AWA2 consists of 37,322 images of 50 animal classes (seen/unseen classes = 40/10).
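The reported dataset statistics can be collected into a small lookup table for sanity-checking, e.g. this minimal sketch (the variable and key names are illustrative, not taken from the authors' code):

```python
# Benchmark statistics as quoted in the paper's dataset description.
DATASETS = {
    # name: images, total classes, seen classes, unseen classes
    "CUB":  {"images": 11788, "classes": 200, "seen": 150, "unseen": 50},
    "SUN":  {"images": 14340, "classes": 717, "seen": 645, "unseen": 72},
    "AWA2": {"images": 37322, "classes": 50,  "seen": 40,  "unseen": 10},
}

def split_is_consistent(stats):
    """Check that seen + unseen class counts sum to the class total."""
    return stats["seen"] + stats["unseen"] == stats["classes"]

# All three reported splits partition the classes exactly.
assert all(split_is_consistent(s) for s in DATASETS.values())
```

The assertion confirms that each reported seen/unseen split partitions the full class set, which is the standard convention for the splits of Xian et al. (2018).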
Hardware Specification | Yes | All experiments are performed on a single NVIDIA RTX 3090 with 24 GB memory.
Software Dependencies | No | We employ PyTorch to implement our experiments. While PyTorch is mentioned, a specific version number is not provided, nor are other key software dependencies with their versions.
Experiment Setup | Yes | We synthesize 1600, 800, and 5000 features per unseen class to train the classifier for CUB, SUN, and AWA2 datasets, respectively. We empirically set the loss weight λ as 0.1 for CUB and AWA2, and 0.001 for SUN. The top-2 similar classes serve as the referent classes for inductions on all datasets. ... Accordingly, we empirically set these hyperparameters {λ, k, Nsyn} as {0.1, 2, 1600}, {0.001, 2, 800} and {0.1, 2, 5000} for CUB, SUN and AWA2, respectively.
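The per-dataset hyperparameters {λ, k, Nsyn} quoted above can be expressed as a plain configuration mapping, which would make a reproduction script explicit about each setting. This is a sketch; the key names are illustrative, not the authors':

```python
# Hyperparameters {λ, k, N_syn} per dataset, as reported in the paper.
HPARAMS = {
    # dataset: loss weight λ, top-k referent classes,
    #          synthesized features per unseen class
    "CUB":  {"loss_weight": 0.1,   "top_k": 2, "n_syn": 1600},
    "SUN":  {"loss_weight": 0.001, "top_k": 2, "n_syn": 800},
    "AWA2": {"loss_weight": 0.1,   "top_k": 2, "n_syn": 5000},
}
```

Keeping such a mapping in one place would let a reproducer swap datasets without editing training code, and it records that k = 2 is shared across all three benchmarks while λ and Nsyn vary.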