Attribute Prediction as Multiple Instance Learning

Authors: Diego Marcos, Aike Potze, Wenjia Xu, Devis Tuia, Zeynep Akata

TMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on CUB-200-2011, SUN Attributes, and AwA2 show improvements on attribute detection, attribute-based zero-shot classification, and weakly supervised part localization. We evaluate AMIL using the image-level attribute annotations where available. Then, we evaluate the learned attributes on attribute-based downstream tasks: zero-shot classification and part localization.
Researcher Affiliation | Academia | Inria, France; Wageningen University, The Netherlands; Chinese Academy of Sciences, China; EPFL, Switzerland; University of Tübingen, Germany
Pseudocode | No | The paper describes its methods using mathematical formulations and textual descriptions but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about the release of source code, nor does it provide a link to a code repository.
Open Datasets | Yes | We use three datasets with attribute annotations: CUB-200-2011 (CUB) (Wah et al., 2011), SUN Attribute Dataset (SUN) (Patterson & Hays, 2012), and Animals With Attributes (AwA2) (Xian et al., 2018a).
Dataset Splits | Yes | In all experiments we use the train-test splits proposed for ZSL in (Xian et al., 2018a) such that the evaluation is always performed on unseen classes.
Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., GPU models, CPU types, or memory).
Software Dependencies | No | The paper mentions using ResNet50 as an image encoder and the Adam optimizer, but does not provide specific version numbers for these or other software libraries/frameworks.
Experiment Setup | Yes | All models are trained for three epochs with a multi-label binary cross-entropy loss or a noise robust loss... We use the Adam optimizer with a learning rate of 0.0001 for the attribute prediction base model and 0.001 for learning the last linear layer of the attribute prediction model, with a learning rate decay of 0.25 after each epoch.
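Since no code is released, the quoted hyperparameters are the only concrete specification of the training setup. A minimal, framework-free sketch of what they imply is below; the function names (`lr_at_epoch`, `multilabel_bce`) are hypothetical, and this is only an illustration of the reported schedule and loss, not the authors' implementation:

```python
import math

# Hyperparameters as quoted in the Experiment Setup row (assumptions: the
# 0.25 decay is multiplicative and applies to both learning rates).
EPOCHS = 3
LR_BASE = 1e-4   # attribute prediction base model
LR_HEAD = 1e-3   # last linear layer of the attribute predictor
LR_DECAY = 0.25  # decay applied after each completed epoch


def lr_at_epoch(initial_lr: float, epoch: int) -> float:
    """Learning rate in effect during 0-indexed `epoch`,
    after `epoch` applications of the 0.25 decay."""
    return initial_lr * LR_DECAY ** epoch


def multilabel_bce(probs, targets):
    """Multi-label binary cross-entropy averaged over attributes.
    `probs` are predicted probabilities in (0, 1); `targets` are 0/1."""
    eps = 1e-12  # guard against log(0)
    terms = [
        -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
        for p, t in zip(probs, targets)
    ]
    return sum(terms) / len(terms)


for epoch in range(EPOCHS):
    print(f"epoch {epoch}: base lr = {lr_at_epoch(LR_BASE, epoch):.2e}, "
          f"head lr = {lr_at_epoch(LR_HEAD, epoch):.2e}")
```

The noise-robust loss mentioned alongside BCE is not specified further in this quote, so it is omitted here.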