Noise-Resistant Label Reconstruction Feature Selection for Partial Multi-Label Learning

Authors: Wanfu Gao, Hanlin Pan, Qingqi Han, Kunpeng Liu

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental. Section 4 is titled 'Experiments' and includes 'Datasets', 'Experimental Setup', 'Results', 'Parameter Analysis', and 'Ablation Study', providing tables and figures of empirical results and metrics. Extensive experiments on benchmark datasets from different fields demonstrate the superiority of the proposed method.
Researcher Affiliation: Academia. Wanfu Gao, Hanlin Pan, and Qingqi Han are with the College of Computer Science and Technology and the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, China; Kunpeng Liu is with the Department of Computer Science, Portland State University, Portland, OR 97201, USA. All affiliations listed are universities, indicating an academic setting.
Pseudocode: Yes. The paper provides Algorithm 1, 'Pseudo code of PML-FSMIR'.
Open Source Code: Yes. The code is available at https://github.com/typsdfgh/PMLFSMIR
Open Datasets: Yes. Experiments are performed on eight datasets from a broad range of applications: Birds [Briggs et al., 2013] for audio, CAL500 [Turnbull et al., 2008] for music classification, Corel5K [Duygulu et al., 2002] for image annotation, LLOG F [Read, 2010] and Slashdot [Read, 2010] for text categorization, Water [Blockeel et al., 1999] for chemistry, Yeast [Elisseeff and Weston, 2001] for gene function prediction, and CHD49 [Shao et al., 2013] for medicine. Table 1 provides detailed characteristics of these datasets. All datasets are well known and cited with their respective papers.
Dataset Splits: Yes. On each dataset, ten-fold cross-validation is performed, and the mean metric values and standard deviations are recorded for each compared method; the selected features are compared on an SVM classifier.
Hardware Specification: No. The paper does not provide hardware details such as GPU/CPU models, processor types, or memory amounts used for the experiments; it mentions only an SVM classifier, without any hardware specifications.
Software Dependencies: No. The paper mentions using an SVM classifier but does not specify the version of the SVM library or any other software dependencies with version numbers.
Experiment Setup: No. The paper discusses the sensitivity of the regularization parameters α, β, and γ (Figure 5) but does not state the specific values used for the results reported in the tables. It mentions a 20% noise level and feature-selection percentages, but these are not detailed training configurations or model hyperparameters.
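The evaluation protocol reported above (ten-fold cross-validation with the mean and standard deviation of a metric recorded per method) can be sketched as follows. This is a minimal illustration, not the authors' code: since the paper names no SVM library or version, the classifier is abstracted into a caller-supplied `evaluate` function, and the fold-splitting and seeding choices here are assumptions.

```python
import random
import statistics

def ten_fold_indices(n_samples, seed=0):
    """Shuffle sample indices and partition them into ten disjoint folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # fixed seed for repeatability (an assumption)
    return [idx[i::10] for i in range(10)]

def cross_validate(evaluate, n_samples):
    """Ten-fold CV: each fold serves once as the test set.

    `evaluate(train_idx, test_idx)` is a placeholder for training a model
    (e.g. an SVM on the selected features) and returning one metric value.
    Returns the mean and standard deviation over the ten folds, matching
    the mean-and-std reporting described in the paper.
    """
    folds = ten_fold_indices(n_samples)
    scores = []
    for k in range(10):
        test_idx = folds[k]
        train_idx = [i for j, fold in enumerate(folds) if j != k for i in fold]
        scores.append(evaluate(train_idx, test_idx))
    return statistics.mean(scores), statistics.pstdev(scores)
```

For example, `cross_validate(lambda tr, te: my_svm_accuracy(tr, te), n)` would yield the per-dataset "mean ± std" entries of the results tables, where `my_svm_accuracy` is a hypothetical train-and-score routine.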