Adaptive-Grained Label Distribution Learning
Authors: Yunan Lu, Weiwei Li, Dun Liu, Huaxiong Li, Xiuyi Jia
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct extensive experiments on real-world datasets to demonstrate the advantages of our proposal. |
| Researcher Affiliation | Academia | Yunan Lu1, Weiwei Li2, Dun Liu3, Huaxiong Li4, Xiuyi Jia1* 1 School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China 2 College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China 3 School of Economics and Management, Southwest Jiaotong University, Chengdu, China 4 Department of Control Science and Intelligence Engineering, Nanjing University, Nanjing, China |
| Pseudocode | Yes | Algorithm 1: Label coarsening function f. Input: training set (X, D), a set distance function d_set(·, ·); Output: coarse-grained labels Y and partitions of the training set π. 1: Y ← 1 (initialize a matrix of coarse-grained labels); 2: for i = 1, 2, ..., M do; 3: π_i ← {{u}}_{u=1}^N (initialize a partition of the training set); 4: π*_i ← π_i (initialize the optimal partition); 5: γ* ← 0 (initialize the maximum FDGI); 6: while \|π_i\| > 1 do; 7: v*_1, v*_2 ← argmin_{v1 ≠ v2 ∈ π_i} d_set(v1, v2) (find the two groups closest to each other in terms of d_i); 8: π_i ← π_i \ v*_1 \ v*_2 ∪ (v*_1 ∪ v*_2) (merge the two closest groups to update the training-set partition); 9: if γ(X, π_i, d_i) > γ* then; 10: γ* ← γ(X, π_i, d_i) (update the maximum FDGI); 11: π*_i ← π_i (update the optimal partition); 12: for k = 1, 2, ..., \|π*_i\| do; 13: π*_ik ← the group in π*_i that ranks k-th in ascending order w.r.t. the average LDD; 14: for u ∈ π*_ik do; 15: y_ui ← k (assign the coarse-grained label); return Y, π. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor does it include a link to a code repository or mention code in supplementary materials. |
| Open Datasets | Yes | We adopt six datasets from several representative real-world tasks, including JAFFE (Lyons et al. 1998) from a facial emotion recognition task, Movie (Geng 2016) from a movie rating prediction task, Emotion6 (Peng et al. 2015) and Painting (Machajdik and Hanbury 2010) from image sentiment recognition tasks, M2B (Nguyen et al. 2012) and FBP5500 (Liang et al. 2018) from facial beauty perception tasks. |
| Dataset Splits | Yes | Given a dataset, it is randomly divided into two chunks (70% for training and 30% for testing). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions using an ordinal logistic model (All-Threshold variant) and preprocessing features by min-max normalization, but it does not specify version numbers for any software libraries or programming languages used. |
| Experiment Setup | Yes | The hyperparameter K in AAkNN is selected from {5, 6, ..., 10}; the hyperparameters λ1 and λ2 in BD-LDL are both selected from {10^-3, 10^-2, ..., 10^3}; the hyperparameters λ and β in LDL-LRR are selected from {10^-6, 10^-5, ..., 10^-1} and {10^-3, 10^-2, ..., 10^2}, respectively; in LDL-LDM, the hyperparameters λ1, λ2, and λ3 are selected from {10^-3, 10^-2, ..., 10^3}, and the hyperparameter g is selected from {1, 2, ..., 14}. In our framework, we use the ordinal logistic model (All-Threshold variant) proposed in (Rennie and Srebro 2005), abbreviated as Logistic AT, as the CGL predictor, whose L2 regularization weight is selected from {1, (2n)^-1, 2n} for n = 1, ..., 5. ... the number of Monte Carlo samples L is set to 20 in this paper. ... ζ is set to 0.05 in this paper. |
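The label-coarsening pseudocode quoted above (Algorithm 1) is an agglomerative procedure: starting from singleton groups, it repeatedly merges the two closest groups, keeps the partition that maximizes the FDGI criterion, and then assigns each sample the rank of its group. A minimal Python sketch follows; since the paper's FDGI criterion γ and set distance d_set are not reproduced in this report, they are taken here as user-supplied callables (`fdgi`, `d_set`), and all names are shorthand for this sketch only.

```python
import numpy as np

def label_coarsen(X, D, fdgi, d_set):
    """Hedged sketch of the paper's Algorithm 1 (label coarsening).

    X: (N, d) feature matrix; D: (N, M) label-distribution matrix.
    fdgi(X, partition, i): scores a partition of samples for label i
        (a stand-in for the paper's FDGI criterion gamma).
    d_set(g1, g2, i): distance between two groups of sample indices
        (a stand-in for the paper's set distance under d_i).
    Returns Y, an (N, M) matrix of coarse-grained labels, plus the
    optimal partition found for each label.
    """
    N, M = D.shape
    Y = np.ones((N, M), dtype=int)            # line 1: initialize Y
    all_partitions = []
    for i in range(M):                        # line 2: one pass per label
        part = [{u} for u in range(N)]        # line 3: singleton partition
        best_part = [g.copy() for g in part]  # line 4: optimal partition
        best_gamma = 0.0                      # line 5: maximum FDGI so far
        while len(part) > 1:                  # line 6: agglomerative merging
            # line 7: find the two closest groups under d_set
            pairs = [(a, b) for a in range(len(part))
                     for b in range(a + 1, len(part))]
            a, b = min(pairs, key=lambda p: d_set(part[p[0]], part[p[1]], i))
            # line 8: merge them to update the partition
            merged = part[a] | part[b]
            part = [g for j, g in enumerate(part) if j not in (a, b)] + [merged]
            gamma = fdgi(X, part, i)          # lines 9-11: track the best FDGI
            if gamma > best_gamma:
                best_gamma = gamma
                best_part = [g.copy() for g in part]
        # lines 12-15: rank groups by average label description degree (LDD)
        # and assign each sample the rank of its group as its coarse label
        best_part.sort(key=lambda g: np.mean([D[u, i] for u in g]))
        for k, group in enumerate(best_part, start=1):
            for u in group:
                Y[u, i] = k
        all_partitions.append(best_part)
    return Y, all_partitions
```

The exhaustive pairwise search on line 7 is O(|π_i|²) per merge, which matches the pseudocode literally; a real implementation would cache group distances, as standard agglomerative clustering libraries do.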
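For quick reference, the search grids quoted in the Experiment Setup row can be collected as a Python dict. Method keys and parameter names below are shorthand for this sketch, not identifiers from any released codebase.

```python
def log_grid(lo, hi):
    """Powers of ten from 10^lo to 10^hi inclusive."""
    return [10.0 ** e for e in range(lo, hi + 1)]

grids = {
    "AAkNN":   {"K": list(range(5, 11))},           # {5, 6, ..., 10}
    "BD-LDL":  {"lambda1": log_grid(-3, 3),
                "lambda2": log_grid(-3, 3)},
    "LDL-LRR": {"lambda": log_grid(-6, -1),
                "beta": log_grid(-3, 2)},
    "LDL-LDM": {"lambda1": log_grid(-3, 3),
                "lambda2": log_grid(-3, 3),
                "lambda3": log_grid(-3, 3),
                "g": list(range(1, 15))},           # {1, 2, ..., 14}
    # Logistic AT (the CGL predictor): L2 regularization weight from
    # {1, (2n)^-1, 2n} for n = 1, ..., 5
    "LogisticAT": {"l2_weight": sorted(
        {1.0} | {1.0 / (2 * n) for n in range(1, 6)}
              | {2.0 * n for n in range(1, 6)})},
}
```

This layout plugs directly into a grid search over each method's candidate values.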