Deep Fuzzy Multi-view Learning for Reliable Classification
Authors: Siyuan Duan, Yuan Sun, Dezhong Peng, Guiduo Duan, Xi Peng, Peng Hu
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments comparing our FUML against 13 state-of-the-art MVC baselines on eight widely-used benchmarks, demonstrating superior accuracy, reliability, and robustness. |
| Researcher Affiliation | Collaboration | 1College of Computer Science, Sichuan University, Chengdu, China. 2 Sichuan National Innovation New Vision UHD Video Technology Co., Ltd, Chengdu, China. 3Tianfu Jincheng Laboratory, Chengdu, China. 4 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China. |
| Pseudocode | Yes | The pseudo-code of our FUML can be found in the Appendix B.3. Algorithm 1 FUML algorithm |
| Open Source Code | Yes | The code of our FUML is available at https://github.com/siyuancncd/FUML. |
| Open Datasets | Yes | To validate the effectiveness of the proposed FUML, we conduct experiments on eight public datasets: Handwritten (HW), MSRC-V1 (MSRC) (Winn & Jojic, 2005), NUS-WIDE-OBJ (NUSOBJ), Fashion-MV (Fashion) (Wang et al., 2023), Scene15 (Scene), Land Use (Yang & Newsam, 2010), Leaves100 (Leaves), and PIE. |
| Dataset Splits | Yes | The training set and the test set are split in a ratio of 8:2. |
| Hardware Specification | Yes | All experiments are implemented in PyTorch and are carried out on an NVIDIA Tesla V100S. |
| Software Dependencies | No | All experiments are implemented in PyTorch and are carried out on an NVIDIA Tesla V100S. During the training phase, our FUML uses Adam (Kingma & Ba, 2015) with β1 = 0.9, β2 = 0.999, a weight decay of 0.0001, and a maximum of 500 epochs. While PyTorch is mentioned, no specific version number is provided for it or any other software library. |
| Experiment Setup | Yes | During the training phase, our FUML uses Adam (Kingma & Ba, 2015) with β1 = 0.9, β2 = 0.999, a weight decay of 0.0001, and a maximum of 500 epochs. The p in Equation (3) is set to 3. For the NUSOBJ and Fashion datasets, the learning rate is set to 0.0002 and the batch size to 400, while for the remaining six datasets, the learning rate is set to 0.001 and the batch size to 100. |
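The hyperparameters quoted in the Experiment Setup row can be collected into a small helper for reproduction attempts. The values (Adam betas, weight decay, epoch budget, per-dataset learning rate and batch size) come from the paper's reported setup; the function name and dictionary layout are illustrative, not taken from the released code.

```python
# Training hyperparameters as reported in the paper's experiment setup.
# Structure is a sketch for reproduction; it is not the authors' code.
ADAM_BETAS = (0.9, 0.999)
WEIGHT_DECAY = 1e-4
MAX_EPOCHS = 500

def training_config(dataset: str) -> dict:
    """Return the reported learning rate and batch size for a benchmark."""
    # NUSOBJ and Fashion use a smaller learning rate with a larger batch;
    # the remaining six datasets share one setting.
    if dataset in {"NUSOBJ", "Fashion"}:
        lr, batch_size = 2e-4, 400
    else:  # HW, MSRC, Scene, Land Use, Leaves, PIE
        lr, batch_size = 1e-3, 100
    return {
        "lr": lr,
        "batch_size": batch_size,
        "betas": ADAM_BETAS,
        "weight_decay": WEIGHT_DECAY,
        "max_epochs": MAX_EPOCHS,
    }
```

The returned dictionary maps directly onto the arguments of `torch.optim.Adam` (`lr`, `betas`, `weight_decay`) plus the data-loader batch size and epoch loop bound.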