Label Distribution Learning with Biased Annotations Assisted by Multi-Label Learning
Authors: Zhiqiang Kou, Si Qin, Hailin Wang, Jing Wang, Mingkun Xie, Shuo Chen, Yuheng Jia, Tongliang Liu, Masashi Sugiyama, Xin Geng
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The effectiveness of the method is validated through comprehensive experiments, with superior performance demonstrated and insights verified. (Section 5). We evaluate our proposed method on 12 real-world datasets. The datasets cover diverse domains: Flickr, Twitter [Yang et al., 2017], and Emotion6 [Peng et al., 2015] describe emotional responses to images. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Southeast University, China 2Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China 3RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan 4Graduate School of Frontier Sciences, The University of Tokyo, Japan 5Sydney AI Centre, The University of Sydney, Australia 6School of Mathematics and Statistics, Xi'an Jiaotong University, China |
| Pseudocode | Yes | To solve model (3), we relax the rank by its convex alternative, the nuclear norm [Gu et al., 2014], and then apply ADMM [Boyd et al., 2011] for efficient optimization. The corresponding augmented Lagrangian function is: $\mathcal{L}(W, O, D, Z, \Lambda) = \lVert Z \rVert_* + \alpha \lVert WX - D \rVert_F^2 + \beta \lVert \hat{D}O - \hat{L} \rVert_F^2 + \gamma \lVert DO - \hat{L} \rVert_F^2 + \eta \lVert D - \hat{D} \rVert_F^2 + \lambda_1 \lVert W \rVert_F^2 + \lambda_2 \lVert O \rVert_F^2 + \langle \Lambda, Z - WXO \rangle + \frac{\rho}{2} \lVert Z - WXO \rVert_F^2$, where $Z$ is a splitting variable for $WXO$, $\Lambda$ is the Lagrange multiplier, and $\rho$ is a positive penalty parameter. The optimization is performed by iteratively updating $W$, $O$, $D$, $Z$, and $\Lambda$ as follows: 1) W-subproblem is formulated as: ... 2) O-subproblem is formulated as: ... 3) D-subproblem is formulated as: ... 4) Z-subproblem is formulated as: ... 5) Finally, update the Lagrange multiplier... |
| Open Source Code | No | The paper does not provide any concrete statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | We evaluate our proposed method on 12 real-world datasets. The datasets cover diverse domains: Flickr, Twitter [Yang et al., 2017], and Emotion6 [Peng et al., 2015] describe emotional responses to images. Fbp5500 and SCUT-FBP focus on facial beauty perception [Ren and Geng, 2017]. RAF-ML is a text dataset for sentiment analysis [Li and Deng, 2019]. The Gene dataset analyzes relationships between genes and diseases [Yu et al., 2012]. Scene is derived from a multi-label dataset by converting label rankings into label distributions [Geng and Xia, 2014]. SJAFFE and SBU-3DFE are facial emotion datasets collected from JAFFE [Lyons et al., 1998] and BU-3DFE [Yin et al., 2006], respectively. Finally, Spo5 and Spoem are yeast datasets obtained from biological experiments [Geng, 2016]. |
| Dataset Splits | Yes | Each method was evaluated using ten-fold cross-validation to ensure robustness. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | The hyperparameters for all methods were set according to their respective publications. For BLDL, the parameters α, β, γ, λ1, and λ2 were fine-tuned over the range {0.1, 0.05, 0.01, 0.005, 0.001}. The parameter η was selected from {1, 10, 50, 100, 150}, and T was fixed at 0.5. |
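The Pseudocode row above describes an ADMM scheme whose Z-subproblem (nuclear norm plus a quadratic coupling to $WXO$) has a standard closed-form solution via singular value thresholding. The sketch below illustrates only that step under common ADMM conventions; the helper names `svt` and `update_Z` are ours, not the paper's, and the other subproblems are elided in the paper itself.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def update_Z(W, X, O, Lam, rho):
    """Z-subproblem of the augmented Lagrangian (illustrative, not the authors' code):
    argmin_Z ||Z||_* + <Lam, Z - WXO> + (rho/2) ||Z - WXO||_F^2
    which has the closed form SVT_{1/rho}(WXO - Lam / rho)."""
    return svt(W @ X @ O - Lam / rho, 1.0 / rho)
```

As `rho` grows, the threshold `1/rho` shrinks and the update pins `Z` ever closer to `WXO`, which is the usual role of the penalty parameter in ADMM splitting.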
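The Experiment Setup row fully specifies the BLDL tuning grids, so the search space can be enumerated directly. A minimal sketch, assuming an exhaustive grid search (the paper does not state the search strategy; the dictionary keys are our naming):

```python
from itertools import product

# Grids as stated in the setup: alpha, beta, gamma, lambda1, lambda2 share one grid.
shared_grid = [0.1, 0.05, 0.01, 0.005, 0.001]
eta_grid = [1, 10, 50, 100, 150]

configs = [
    dict(alpha=a, beta=b, gamma=g, lam1=l1, lam2=l2, eta=e, T=0.5)
    for a, b, g, l1, l2, e in product(*[shared_grid] * 5, eta_grid)
]
# 5 values for each of 5 shared parameters, times 5 values of eta: 5**6 = 15625 configs.
```

Exhausting all 15,625 combinations under ten-fold cross-validation is expensive, which is one reason the missing hardware and runtime details matter for reproducibility.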