Enhancing Multi-View Classification Reliability with Adaptive Rejection
Authors: Wei Liu, Yufei Chen, Xiaodong Yue
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The effectiveness of our method is demonstrated through comprehensive theoretical analysis and empirical experiments on various multi-view datasets, establishing its superiority in enhancing the reliability of multi-view classification. |
| Researcher Affiliation | Academia | Wei Liu¹, Yufei Chen¹*, Xiaodong Yue² — ¹School of Computer Science and Technology, Tongji University, Shanghai, China; ²Artificial Intelligence Institute of Shanghai University, Shanghai University, Shanghai, China. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes mathematical formulations and propositions but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | Datasets: We conducted experiments on six real-world multi-view datasets as follows: ANIMAL (Lampert, Nickisch, and Harmeling 2013) [...] HAND (van Breukelen et al. 1998) [...] CUB (Wah et al. 2011) [...] SCENE (Fei-Fei and Perona 2005) [...] MRNet (Bien et al. 2018) [...] CAL (Fei-Fei, Fergus, and Perona 2004) |
| Dataset Splits | Yes | For all multi-view datasets, the data was split into training (70%), testing (20%), and calibration (10%) sets. |
| Hardware Specification | Yes | The model was implemented in PyTorch and run on a GeForce RTX 4090 GPU with 24GB memory. |
| Software Dependencies | No | The Adam optimizer (Kingma and Ba 2014) was used for network training, with l2-norm regularization set to 1e-5. [...] The model was implemented in PyTorch and run on a GeForce RTX 4090 GPU with 24GB memory. The paper mentions PyTorch and the Adam optimizer but does not specify their version numbers. |
| Experiment Setup | Yes | The Adam optimizer (Kingma and Ba 2014) was used for network training, with l2-norm regularization set to 1e-5. A 5-fold cross-validation was employed to select the learning rate from {1e-5, 3e-4, 1e-3, 3e-3}. For all multi-view datasets, the data was split into training (70%), testing (20%), and calibration (10%) sets. |
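The reported split proportions and learning-rate grid can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `split_indices` and the seed handling are hypothetical, and only the 70/20/10 proportions and the learning-rate candidates come from the paper.

```python
import random

def split_indices(n, seed=0):
    """Partition n sample indices into train (70%), test (20%), and
    calibration (10%) subsets, matching the proportions reported in the paper.
    The shuffling and seed are illustrative assumptions."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(0.7 * n)
    n_test = int(0.2 * n)
    train = idx[:n_train]
    test = idx[n_train:n_train + n_test]
    calib = idx[n_train + n_test:]
    return train, test, calib

# Learning-rate candidates searched via 5-fold cross-validation (per the paper).
LR_GRID = [1e-5, 3e-4, 1e-3, 3e-3]

train, test, calib = split_indices(1000)
```

For a dataset of 1000 samples this yields 700 training, 200 testing, and 100 calibration indices, with every index assigned to exactly one subset.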