Learning Compact Semantic Information for Incomplete Multi-View Missing Multi-Label Classification

Authors: Jie Wen, Yadong Liu, Zhanyan Tang, Yuting He, Yulong Chen, Mu Li, Chengliang Liu

ICML 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on multiple benchmarks validate our advantages and demonstrate strong compatibility with both missing and complete data." (Section 4: Experiments; 4.1 Experimental Settings; 4.2 Experimental Results and Analysis) |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, 518000, China; (2) Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA |
| Pseudocode | Yes | "Algorithm 1: Training process of COME" |
| Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code or a link to a code repository. |
| Open Datasets | Yes | "In line with previous works (Tan et al., 2018; Liu et al., 2023), we conduct experiments on five multi-view multi-label datasets, i.e., Corel5k (Duygulu et al., 2002), Pascal07 (Everingham et al., 2010), ESPGame (Von Ahn & Dabbish, 2004), IAPRTC12 (Grubinger et al., 2006), and Mirflickr (Huiskes & Lew, 2008)." |
| Dataset Splits | Yes | "(3) Dataset Splitting: Subsequently, 70% of the resulting samples are randomly selected as the training set." |
| Hardware Specification | No | The paper does not specify the hardware used for its experiments (e.g., exact GPU/CPU models, processor speeds, or memory amounts). |
| Software Dependencies | No | The paper does not list the ancillary software (e.g., library or solver names with version numbers) needed to replicate the experiments. |
| Experiment Setup | Yes | "Algorithm 1 Training process of COME ... Initialization: Initialize the parameters of the model A and set hyper-parameters (λ1, λ2, β, and training epochs E) ... when the value of β is 0.1 and 1 for the Corel5k and Pascal07 datasets, respectively, information compression and effective information reconstruction reach a balanced state, and the model achieves the optimal performance." |
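The dataset-splitting protocol quoted above (randomly selecting 70% of samples for training) could be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the fixed seed, and the use of Python's `random` module are all assumptions.

```python
import random

def split_train(n_samples, train_ratio=0.7, seed=0):
    """Randomly pick a fraction of sample indices for training.

    Mirrors the paper's reported protocol of selecting 70% of samples
    at random; the seed and function name are illustrative only.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    rng.shuffle(indices)  # random permutation of sample indices
    n_train = int(round(train_ratio * n_samples))
    return indices[:n_train], indices[n_train:]

train_idx, test_idx = split_train(1000)
print(len(train_idx), len(test_idx))  # 700 300
```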
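The initialization step of Algorithm 1 (setting λ1, λ2, β, and the epoch count E) could be captured in a configuration helper like the sketch below. Only the dataset-specific β values (0.1 for Corel5k, 1 for Pascal07) come from the paper's sensitivity analysis; the defaults for `lambda1`, `lambda2`, and `epochs`, and the fallback β, are placeholders.

```python
def make_config(dataset, lambda1=1.0, lambda2=1.0, epochs=100):
    """Assemble hyper-parameters for Algorithm 1's initialization step.

    Only beta's per-dataset values are taken from the paper; every
    other default here is an illustrative placeholder.
    """
    beta_by_dataset = {"Corel5k": 0.1, "Pascal07": 1.0}
    return {
        "lambda1": lambda1,
        "lambda2": lambda2,
        "beta": beta_by_dataset.get(dataset, 1.0),  # fallback is assumed
        "epochs": epochs,
    }

print(make_config("Corel5k")["beta"])  # 0.1
```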