Divide and Conquer: Learning Label Distribution with Subtasks

Authors: Haitao Wu, Weiwei Li, Xiuyi Jia

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our analysis and experiments demonstrate that S-LDL is effective and efficient. To the best of our knowledge, this paper represents the first endeavor to address LDL via subtasks. (...) In this section, we evaluate S-LDL via a series of experiments. Due to page limitations, datasets, comparison methods, parameter settings, and full experimental results are introduced in the appendix. Details of all implementations are openly accessible at GitHub. (...) Tables 3 to 6 show representative results of the shallow/deep-regime S-LDL and the remainder are in the appendix, where a superiority (inferiority) marker indicates that more than half of the metrics support that S-X is statistically superior (inferior) to the corresponding method X (pairwise t-test at 0.05 significance level); there is no significant difference if neither marker is shown.
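The quoted decision rule (mark S-X superior or inferior to X when a pairwise t-test on per-fold scores rejects at the 0.05 level) can be sketched as below. This is a generic illustration, not code from the paper: the function names are ours, and the critical value 1.984 (two-sided, 0.05 level, 99 degrees of freedom for the paper's 10×10-fold protocol) is a hardcoded assumption; it also assumes a higher score is better.

```python
import math
import statistics

def paired_t_statistic(a, b):
    """t statistic of a paired t-test on per-fold scores a vs. b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return mean / (sd / math.sqrt(n))

# Two-sided critical value at the 0.05 level for n - 1 = 99 degrees of
# freedom (10-fold CV repeated 10 times -> 100 paired scores). Hardcoded
# here for illustration; a real implementation would use the t CDF.
T_CRIT_99 = 1.984

def verdict(a, b):
    """Return 'superior', 'inferior', or 'no significant difference'
    for method a compared with method b (higher score assumed better)."""
    t = paired_t_statistic(a, b)
    if t > T_CRIT_99:
        return "superior"
    if t < -T_CRIT_99:
        return "inferior"
    return "no significant difference"
```

In the report's notation, a paper would print the superiority marker next to S-X whenever `verdict` returns "superior" on more than half of the evaluation metrics.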
Researcher Affiliation | Academia | ¹School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; ²College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China. Correspondence to: Xiuyi Jia <EMAIL>.
Pseudocode | Yes | Algorithm 1: Subtask construction. Input: Input matrix D, trade-off parameter λ, anticipated number of subtasks T. Output: Subtask distribution matrices D (with corresponding subtask label spaces Y). (...) Algorithm 2: S-LDL (shallow regime). Input: Feature matrix X, label distribution matrix D, testing instance x. Output: Predicted label distribution d for instance x.
Open Source Code | Yes | https://github.com/SpriteMisaka/PyLDL
Open Datasets | Yes | We adopt several widely used label distribution datasets, including: JAFFE (Lyons et al., 1998); fbp5500 (Liang et al., 2018); SBU_3DFE, Movie, Natural Scene, Yeast-heat, Yeast-diau, Yeast-cold, and Yeast-dtt provided by Geng (2016); emotion6, Twitter, and Flickr provided by Yang et al. (2017b).
Dataset Splits | Yes | For each dataset we conduct ten-fold experiments repeated 10 times, and the average performance is recorded. (...) For IncomLDL, we follow the incomplete settings (Xu & Zhou, 2017) and vary the observed rate ω from 20% to 40%.
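The split protocol quoted above (ten-fold cross-validation repeated 10 times, averaging all 100 scores) can be reproduced with a small index generator. This is a generic sketch, not code from the PyLDL repository; the function name and seed are ours.

```python
import random

def repeated_kfold_indices(n, k=10, repeats=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold CV repeated
    `repeats` times with a fresh shuffle per repeat: k * repeats splits."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        # Strided assignment puts each index into exactly one fold.
        folds = [idx[i::k] for i in range(k)]
        for t in range(k):
            test = folds[t]
            train = [j for f in range(k) if f != t for j in folds[f]]
            yield train, test
```

With `k=10, repeats=10` this yields the paper's 100 train/test splits per dataset; a method's reported score is the mean over all of them.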
Hardware Specification | Yes | All the results are obtained on a Linux workstation with an Intel Core i9 (3.70 GHz), an NVIDIA GeForce RTX 3090 (24 GB), and 32 GB memory.
Software Dependencies | No | The paper mentions using Adam for optimization but does not provide version numbers for the key software libraries or programming languages used in the implementation.
Experiment Setup | Yes | The parameter settings of the proposed S-LDL and comparison algorithms are summarized in Table 9. (...) For all methods of the deep regime, the learning rate is chosen among {1, 2, 5} × 10^{−4, −3, −2}, and the selection of the number of epochs is nested into a ten-fold cross-validation. (...) S-LDL: µ, λ, T = 0.1, 0.2, 10.
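The learning-rate grid {1, 2, 5} × 10^{−4, −3, −2} quoted above expands to nine candidate values; a one-liner makes the grid concrete (generic Python, not taken from the paper's code):

```python
# Candidate learning rates {1, 2, 5} x 10^{-4, -3, -2} for the deep regime.
learning_rates = sorted(m * 10.0 ** e for m in (1, 2, 5) for e in (-4, -3, -2))
# -> [0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05]
```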