Implicit Relative Labeling-Importance Aware Multi-Label Metric Learning

Authors: Jun-Xiang Mao, Yong Rui, Min-Ling Zhang

AAAI 2025

Reproducibility Variable: Result (followed by the supporting LLM response)
Research Type: Experimental. "Comprehensive experiments on benchmark multi-label datasets validate the superiority of our proposed approach in learning effective similarity metrics between multi-label examples."
Researcher Affiliation: Collaboration. "1School of Computer Science and Engineering, Southeast University, Nanjing 210096, China ... EMAIL" and "4Lenovo Research, Lenovo Group Ltd., Beijing, China ... EMAIL"
Pseudocode: No. "The complete procedure of the proposed ILIA approach is summarized in Appendix A." (Appendix A is not included in the provided text.)
Open Source Code: No. The paper does not provide concrete access to source code for the described methodology.
Open Datasets: Yes. "In this paper, ten real-world multi-label datasets with diversified properties are employed for comparative studies. Table 1 summarizes the detailed characteristics of each benchmark dataset D, including the number of examples |D|, the number of features dim(D), the number of labels L(D), the label cardinality LCard(D), and the domain of each dataset." Dataset sources: http://mulan.sourceforge.net/datasets.html and http://palm.seu.edu.cn/zhangml/Resources.htm#data
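Of the dataset statistics listed above, label cardinality is the only derived one: LCard(D) is the average number of relevant labels per example, i.e. the sum of all positive label assignments divided by |D|. A minimal sketch of that computation (the toy label matrix below is illustrative, not taken from the paper's benchmark datasets):

```python
# Label cardinality LCard(D): average number of relevant labels per example.
# Rows of Y are examples; Y[i][j] == 1 iff label j is relevant to example i.
def label_cardinality(Y):
    return sum(sum(row) for row in Y) / len(Y)

# Toy 4-example, 3-label matrix (illustrative only).
Y = [
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
]
print(label_cardinality(Y))  # 7 relevant labels over 4 examples -> 1.75
```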
Dataset Splits: Yes. "Ten-fold cross-validation is employed to evaluate the above compared approaches in this paper."
Hardware Specification: No. The paper does not specify the hardware used to run its experiments.
Software Dependencies: No. The paper does not name the ancillary software (e.g., libraries or solvers with version numbers) needed to replicate the experiments.
Experiment Setup: Yes. "For the proposed ILIA approach, we use the Polynomial kernel and set the parameters as follows: the trade-off parameters µ = 10^3, η = 10^2, γ = 10^2, and the number of nearest neighbors k = 20. ... For KNN and MLKNN, the number of nearest neighbors is fixed to 10 for fair comparisons."
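Since the paper's code is not available, the evaluation protocol quoted above (ten-fold cross-validation with a k-NN baseline, k = 10) can only be sketched. The sketch below uses plain Euclidean distance in place of ILIA's learned metric, and all function names and the fold-assignment scheme are assumptions for illustration:

```python
import random

def ten_fold_splits(n, seed=0):
    """Yield (train_idx, test_idx) pairs for ten-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::10] for i in range(10)]  # ten near-equal folds
    for i in range(10):
        test = folds[i]
        train = [j for k, fold in enumerate(folds) if k != i for j in fold]
        yield train, test

def knn_predict(X_train, Y_train, x, k=10):
    """Multi-label k-NN baseline: majority vote per label among the
    k nearest training examples under Euclidean distance."""
    order = sorted(range(len(X_train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X_train[i], x)))
    nn = order[:k]
    n_labels = len(Y_train[0])
    # Label j is predicted relevant if more than half the neighbors carry it.
    return [1 if 2 * sum(Y_train[i][j] for i in nn) > k else 0
            for j in range(n_labels)]
```

A real replication would substitute the distance function with the metric learned by ILIA and average a multi-label loss (e.g. Hamming loss) over the ten test folds.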