CFDM: Contrastive Fusion and Disambiguation for Multi-View Partial-Label Learning
Authors: Qiuru Hai, Yongjian Deng, Yuena Lin, Zheng Li, Zhen Yang, Gengyu Lyu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on multiple datasets have demonstrated that our proposed method is superior to other state-of-the-art methods. ... To evaluate the performance of our proposed CFDM, we conducted experiments on six synthetic MVPLL datasets |
| Researcher Affiliation | Collaboration | 1College of Computer Science, Beijing University of Technology 2Idealism Beijing Technology Co., Ltd. |
| Pseudocode | Yes | Algorithm 1: Pseudo-code of CFDM (one epoch) |
| Open Source Code | No | The paper does not provide any explicit statements about code availability, nor does it include links to a code repository or mention code in supplementary materials. |
| Open Datasets | Yes | To evaluate the performance of our proposed CFDM, we conducted experiments on six synthetic MVPLL datasets, which are generated from the widely-used multi-view datasets, including MSRCv1 (Xu, Han, and Nie 2016), Caltech101-7 (Fei-Fei, Fergus, and Perona 2004), Mfeat (Wang, Yang, and Liu 2019), Scene15 (Fei-Fei and Perona 2005), CCV (Jiang et al. 2011), Caltech101-all (Fei-Fei, Fergus, and Perona 2004) |
| Dataset Splits | Yes | For all experiments, we utilize 5-fold cross-validation, and record the mean and standard deviation (mean ± std) as the final results. |
| Hardware Specification | Yes | All experiments are conducted on a machine equipped with an Intel(R) Xeon(R) Gold 6148 2.40GHz CPU, GeForce RTX 3090 GPU, and 512GB RAM. |
| Software Dependencies | No | The paper mentions using PyTorch (Paszke et al. 2019) and the Adam optimizer but does not specify version numbers for PyTorch or any other libraries or software components. |
| Experiment Setup | Yes | The learning rate is chosen from {1e-4, 3e-4, 5e-4}. The hyperparameter γ is set to 0.99, τ is set to 0.07, while β linearly decreases from 0.95 to 0.8. The number of training epochs is set to 130. |
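The reported experiment setup (fixed γ and τ, a learning-rate grid, and a β that linearly decays from 0.95 to 0.8 over 130 epochs) can be sketched as a minimal configuration. This is a hypothetical reconstruction for illustration only; the paper releases no code, and all names below (`NUM_EPOCHS`, `beta_at`, etc.) are our own:

```python
# Hypothetical sketch of the CFDM training configuration as reported
# in the paper; all identifiers are illustrative, not from the authors.

NUM_EPOCHS = 130                 # training epochs (from the paper)
GAMMA = 0.99                     # hyperparameter gamma
TAU = 0.07                       # temperature tau
LR_GRID = [1e-4, 3e-4, 5e-4]     # learning-rate search grid
BETA_START, BETA_END = 0.95, 0.8 # endpoints of the beta schedule

def beta_at(epoch: int) -> float:
    """Linearly decrease beta from 0.95 to 0.8 over training.

    Assumes epoch 0 uses BETA_START and the final epoch uses BETA_END;
    the paper does not state the exact interpolation endpoints.
    """
    frac = epoch / (NUM_EPOCHS - 1)
    return BETA_START + frac * (BETA_END - BETA_START)
```

Under this reading, `beta_at(0)` returns 0.95 and `beta_at(129)` returns 0.8, matching the stated range; a per-iteration (rather than per-epoch) schedule would be an equally plausible interpretation.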