Multi-Label Ranking Loss Minimization for Matrix Completion
Authors: Jiaxuan Li, Xiaoyan Zhu, Hongrui Wang, Yu Zhang, Xin Lai, Jiayin Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that MLRM outperforms state-of-the-art matrix completion methods in a variety of applications, including movie recommendation, drug-target interaction prediction and multi-label learning. 4 Experiments 4.1 Comparison Methods and Datasets The proposed method is applied to 3 types of matrix completion data, including movie recommendation (Movie Rec), drug-target interaction prediction (DTI) and multi-label learning (MLL). The statistics of benchmark datasets from different sources are listed in Tab. 2... 4.2 Experimental Results We compare MLRM with baseline algorithms in this section. To conduct fair experiments, the random partition of training and testing sets for each benchmark dataset is performed 10 times, and the average values and standard deviations are reported. The comparison results for Movie Rec, DTI and MLL are shown in Tab. 3 to 5. 4.4 Ablation Study In addition to the multi-label ranking loss, MLRM introduces 2 mechanisms to enhance model performance, i.e., inductive learning on side information matrices and result correction. To verify the effectiveness of each part, an ablation study is conducted to compare MLRM with two baselines: B2: removing the result correction from MLRM; B1: removing the inductive learning from B2. The ablation study result is shown in Fig. 2, where the ablation study on DTI and MLL is illustrated with box plots, and the ablation study on Movie Rec is illustrated with a line chart of average values, as only a single dataset with 6 sampling rates is involved. |
| Researcher Affiliation | Academia | Jiaxuan Li, Xiaoyan Zhu *, Hongrui Wang, Yu Zhang, Xin Lai, Jiayin Wang School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, China EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Framework of MLRM. Input: Side matrices A, C, the matrix to be completed Y. Parameter: Hyper-parameters λ1, λ2. Output: M, N. 1: Calculate convert matrix L according to Eq. 11. 2: Calculate pairwise ranking matrix R = R_Ω(Y)L. 3: Update right side information matrix B = L^T C. 4: while not converged do 5: E^k ← R − A N^k B^T; 6: M^{k+1} ← D_{λ1}(E^k); 7: F^k ← N^k − (1/Lp) A^T (M^{k+1} + A N^k B^T − R) B; 8: N^{k+1} ← S_{λ2}(F^k); 9: k ← k + 1; 10: end while 11: M ← M^k, N ← N^k; 12: return M, N. |
| Open Source Code | Yes | More details of MLRM, including the model implementation, model convergence, convert matrix, and supplementary experiments, can be found in the Codes and Appendix at https://github.com/JiaxuanGood/MLRM.git |
| Open Datasets | Yes | The MovieLens, DTI, and MLL datasets can be publicly obtained from (Harper and Konstan 2015), (Yamanishi et al. 2008), and the website of KDIS (http://www.uco.es/kdis/mllresources/), respectively. |
| Dataset Splits | Yes | In each scenario, we set the proportion of the training data to ω% and take 5-fold cross validation to determine the hyper-parameters. In the experiments, ω = {10, 30, 50, 70, 90} is set to simulate different observation rates. |
| Hardware Specification | Yes | To ensure experimental fairness, all programs are executed on Intel(R) Core(TM) i7-13650HX CPU @ 2.60GHz. |
| Software Dependencies | No | The paper discusses the computational complexity of MLRM and comparison of running times but does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks like Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | Yes | For MLRM, we grid search 10^{−1,…,3} for hyper-parameters, and set λ1 = 10^2, λ2 = 10^3 for Movie Rec, λ1 = λ2 = 10 for MLL and DTI. Besides, for all methods, the tolerance for iteration termination is 0.01, and the maximum number of iterations is 1000. |
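
The alternating updates quoted in the Pseudocode row can be sketched in numpy. This is a minimal, hypothetical reading of Algorithm 1 only, not the authors' implementation: the function names `svt`, `soft_threshold`, and `mlrm_sketch` are assumptions, `D_λ1` is read as singular-value thresholding, `S_λ2` as elementwise soft-thresholding, and `Lp` is estimated here as a product of spectral norms (the paper does not quote its definition).

```python
import numpy as np

def svt(X, tau):
    # Singular-value thresholding: assumed reading of D_tau in Algorithm 1
    # (proximal operator of the nuclear norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(X, tau):
    # Elementwise soft-thresholding: assumed reading of S_tau
    # (proximal operator of the L1 norm).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def mlrm_sketch(A, B, R, lam1=1.0, lam2=1.0, max_iter=1000, tol=0.01):
    # Hypothetical sketch of Algorithm 1's loop (lines 4-10 of the
    # pseudocode). A, B are side-information matrices, R the pairwise
    # ranking matrix; tolerance/max_iter follow the Experiment Setup row.
    m, n = R.shape
    N = np.zeros((A.shape[1], B.shape[1]))
    M = np.zeros((m, n))
    # Lp: assumed Lipschitz-style step-size constant, not from the paper.
    Lp = np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2
    for _ in range(max_iter):
        E = R - A @ N @ B.T                      # E^k <- R - A N^k B^T
        M_new = svt(E, lam1)                     # M^{k+1} <- D_{lam1}(E^k)
        F = N - (1.0 / Lp) * A.T @ (M_new + A @ N @ B.T - R) @ B
        N_new = soft_threshold(F, lam2)          # N^{k+1} <- S_{lam2}(F^k)
        delta = np.linalg.norm(M_new - M) + np.linalg.norm(N_new - N)
        M, N = M_new, N_new
        if delta < tol:
            break
    return M, N
```

The two proximal steps alternate until the combined change in M and N drops below the quoted tolerance of 0.01.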
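
The Dataset Splits and Experiment Setup rows together describe a grid search over 10^{−1,…,3} scored by 5-fold cross-validation on the observed entries. A minimal sketch of that procedure, under stated assumptions: the function `five_fold_grid_search` and the callback `fit_score` (which trains on one index set and returns a validation score on another) are hypothetical names, not from the paper.

```python
import itertools
import numpy as np

def five_fold_grid_search(n_observed, fit_score, seed=0):
    # Hypothetical sketch: search (lam1, lam2) over 10^-1 ... 10^3,
    # scoring each pair by 5-fold cross-validation over the indices
    # of the observed entries. `fit_score(train_idx, val_idx, lam1, lam2)`
    # is an assumed user-supplied train-and-evaluate callback.
    grid = [10.0 ** p for p in range(-1, 4)]  # 10^-1, 10^0, ..., 10^3
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_observed), 5)
    best, best_score = None, -np.inf
    for lam1, lam2 in itertools.product(grid, grid):
        scores = []
        for k in range(5):
            val = folds[k]
            train = np.concatenate([folds[j] for j in range(5) if j != k])
            scores.append(fit_score(train, val, lam1, lam2))
        mean = float(np.mean(scores))
        if mean > best_score:
            best, best_score = (lam1, lam2), mean
    return best, best_score
```

With a 5-value grid per hyper-parameter, this evaluates 25 (λ1, λ2) pairs, each trained 5 times, and returns the pair with the best mean validation score.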