Decoupled Imbalanced Label Distribution Learning
Authors: Yongbiao Gao, Xiangcheng Sun, Miaogen Ling, Chao Tan, Yi Zhai, Guohua Lv
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our proposed DILDL outperforms other state-of-the-art methods for imbalanced label distribution learning. In this section, we conduct extensive experiments on six ILDL datasets, which are sampled from standard LDL datasets, to assess the effectiveness of our proposed decoupled imbalanced label distribution learning approach. |
| Researcher Affiliation | Academia | (1) Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China; (2) Shandong Provincial Key Laboratory of Computing Power Internet and Service Computing, Shandong Fundamental Research Center for Computer Science, Jinan, China; (3) Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Application (Southeast University), Ministry of Education, China; (4) Shandong Key Laboratory of Ubiquitous Intelligent Computing, Jinan, China; (5) School of Computer and Software, Nanjing University of Information Science and Technology; (6) School of Computer and Electronic Information/School of Artificial Intelligence, Nanjing Normal University. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology using mathematical equations and descriptions, but it does not include a distinct section labeled "Pseudocode" or "Algorithm" with structured, code-like steps. |
| Open Source Code | Yes | All experiments were implemented using the PyTorch framework and executed on one NVIDIA GeForce RTX 4060 GPU. The code of the paper has been open-sourced. |
| Open Datasets | Yes | The datasets encompass a diverse range of sources, including SCUT-FBP [Xie et al., 2015], Flicker-LDL [Yang et al., 2017a; Yang et al., 2017b], Movie [Geng and Hou, 2015], Emotion6 [Peng et al., 2015], Natural Scene [Geng, 2016], and RAF-ML [Li and Deng, 2019] |
| Dataset Splits | Yes | Specifically, we randomly split each dataset 10 times, allocating a substantial portion of 90% of the data to the combined training and validation sets. Within this 90%, we typically further subdivide the data into separate training and validation subsets to fine-tune our models and prevent overfitting. The remaining 10% of the data is reserved for the test set, serving as an unbiased evaluation of our model's performance on unseen data. |
| Hardware Specification | Yes | All experiments were implemented using the PyTorch framework and executed on one NVIDIA GeForce RTX 4060 GPU. |
| Software Dependencies | No | The paper mentions the "PyTorch framework" but does not specify a version number or any other software dependencies with version details. |
| Experiment Setup | Yes | The learning rate is set to 0.001. The batch size is 50. The trade-off parameter α in Eq. (17) is 0.6, which is selected via parameter sensitivity analysis. The trade-off parameters λ, β, and γ for alignment are all set to 0.1. The maximum number of epochs is 300. |
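The split protocol and hyperparameters reported above can be sketched as follows. This is a minimal illustration, not the authors' released code: the function name `random_splits` and the fixed seed are assumptions, and the config dict simply collects the values quoted in the Experiment Setup row.

```python
import random

# Hyperparameters quoted from the paper's experiment setup.
CONFIG = {
    "learning_rate": 0.001,
    "batch_size": 50,
    "alpha": 0.6,    # trade-off parameter in Eq. (17)
    "lambda": 0.1,   # alignment trade-off parameters
    "beta": 0.1,
    "gamma": 0.1,
    "max_epoch": 300,
}

def random_splits(n_samples, n_repeats=10, train_val_frac=0.9, seed=0):
    """Yield (train_val_indices, test_indices) for repeated random splits.

    Mirrors the reported protocol: each dataset is split 10 times, with
    90% used for training/validation and 10% held out for testing.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    cut = int(n_samples * train_val_frac)
    for _ in range(n_repeats):
        rng.shuffle(indices)
        # Slices copy the shuffled order, so later shuffles don't mutate them.
        yield indices[:cut], indices[cut:]
```

Each yielded pair partitions the full index set, so every sample appears in exactly one of the two subsets per repeat; the further train/validation subdivision inside the 90% is left to the caller, since the paper does not state its ratio.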