Approximately Correct Label Distribution Learning

Authors: Weiwei Li, Haitao Wu, Yunan Lu, Xiuyi Jia

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our theoretical analysis and empirical results demonstrate the effectiveness of the proposed solution. In this section, extensive experiments are conducted to illustrate the superiority of the µ metric and δ-LDL."
Researcher Affiliation | Academia | "1) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China; 2) School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; 3) Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China. Correspondence to: Xiuyi Jia <EMAIL>."
Pseudocode | Yes |
Algorithm 1 — Adaptive Simpson's rule (ASR)
Input: interval bounds a, b; error tolerance ε; current integral estimate s; maximum recursion depth ξ.
Output: integral estimate s*.
1: c ← (a + b)/2
2: l ← φ0(a, c)    ▷ Equation (15)
3: r ← φ0(c, b)
4: if |l + r − s| ≤ 15ε or ξ ≤ 0 then
5:   return l + r + (l + r − s)/15    ▷ Equation (17)
6: end if
7: return ASR(a, c, ε/2, l, ξ − 1) + ASR(c, b, ε/2, r, ξ − 1)
Algorithm 2 — Our proposed algorithm (δ-LDL)
Input: training set {(x_i, d_i)}_{i=1}^m, maximum recursion depth ξ, test sample x*.
Output: label distribution d*.
Open Source Code | Yes | "Details of all implementations are openly accessible at GitHub: https://github.com/SpriteMisaka/PyLDL"
Open Datasets | Yes | "We adopt several widely used label distribution datasets, including: M2B (Nguyen et al., 2012), fbp5500 (Liang et al., 2018), RAF-ML (Li & Deng, 2019), SBU-3DFE (Yin et al., 2006), Natural Scene (Geng et al., 2021), Music (Lee et al., 2021) and Painting (Machajdik & Hanbury, 2010)."
Dataset Splits | Yes | "To ensure a fair comparison, for each dataset and for each method we conduct ten-fold experiments repeated 10 times, and the average performance is recorded."
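The 10 × 10-fold evaluation protocol quoted above can be sketched as follows. This is a generic NumPy illustration of the protocol, not the paper's code; `fit_predict` and `metric` are placeholder callables.

```python
import numpy as np

def repeated_kfold_score(fit_predict, metric, X, Y,
                         n_splits=10, n_repeats=10, seed=0):
    """Average a metric over n_repeats repetitions of n_splits-fold CV.

    fit_predict(X_train, Y_train, X_test) -> predictions for X_test
    metric(Y_true, Y_pred) -> scalar score
    """
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        # Fresh random partition of the samples for each repetition.
        idx = rng.permutation(len(X))
        folds = np.array_split(idx, n_splits)
        for k in range(n_splits):
            test = folds[k]
            train = np.concatenate(
                [folds[j] for j in range(n_splits) if j != k])
            pred = fit_predict(X[train], Y[train], X[test])
            scores.append(metric(Y[test], pred))
    # Average over all n_splits * n_repeats held-out folds.
    return float(np.mean(scores))
```

Each sample appears in exactly one test fold per repetition, so the reported number averages n_splits × n_repeats held-out scores.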
Hardware Specification | No | No specific hardware details (such as GPU/CPU models or machine configurations) are provided in the paper, which only mentions general computational aspects such as time-complexity analyses.
Software Dependencies | No | The paper mentions 'Adam (Kingma & Ba, 2015)' as the optimization method and the 'ReLU function' for activation, but does not provide specific version numbers for any software libraries, frameworks, or programming languages used for implementation.
Experiment Setup | Yes | "In Equation (14), σ(·) is implemented as the ReLU function; in Algorithm 2, ξ is set to 5; in Equation (18), serving as a performance baseline, EPS can be a small positive number, e.g., 10^-7. The above loss function can be optimized by gradient-descent methods such as Adam (Kingma & Ba, 2015). The LDL model f is implemented by a naive network with the parameter matrix Θ ∈ R^(q×c): f(x; Θ) = ς(xΘ), (20) where ς(·) is the softmax function as the final mathematical processing."
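Equation (20) amounts to a single linear layer followed by a softmax, so every prediction is a valid label distribution (non-negative, summing to 1). A minimal NumPy sketch, with variable names of our own choosing:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax; subtracting the max keeps exp() stable."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_distribution(x, theta):
    """Naive LDL model of Equation (20): f(x; Θ) = ς(xΘ).

    x:     (n, q) feature matrix
    theta: (q, c) parameter matrix Θ
    Returns an (n, c) matrix whose rows are label distributions.
    """
    return softmax(x @ theta)
```

Because the softmax ς(·) is the final processing step, the model's output can be compared directly against the ground-truth label distributions by any LDL loss, with Θ trained by gradient descent (e.g., Adam).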