Uncertainty-Aware Global-View Reconstruction for Multi-View Multi-Label Feature Selection
Authors: Pingting Hao, Kunpeng Liu, Wanfu Gao
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the superior performance of our method on multi-view datasets. ... Experiments Experimental Setup Datasets As shown in Table 1, we evaluate our method on six widely used multi-view multi-label datasets, namely yeast (Elisseeff and Weston 2001), SCENE (Chua et al. 2009), VOC07 (Everingham and Winn 2010), MIRFlickr (Huiskes and Lew 2008), IAPRTC12 (Escalante et al. 2010), and 3Sources (Greene and Cunningham 2009). ... Evaluation Metrics We evaluate the effectiveness of our method using four widely adopted metrics (Zhang and Zhou 2013; Gibaja and Ventura 2015), i.e., Average Precision (AP), Coverage, Hamming Loss (HL) and Ranking Loss (RL). ... Experimental Results Feature Selection Performance Tables 2-3 show the performance of UGRFS, compared with other methods. |
| Researcher Affiliation | Academia | Pingting Hao¹,², Kunpeng Liu³, Wanfu Gao¹,²*. ¹College of Computer Science and Technology, Jilin University, China; ²Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, China; ³Department of Computer Science, Portland State University, Portland, OR 97201 USA. EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Uncertainty-aware Global-view Reconstruction. Input: data matrices {X^(i)}_{i=1}^V; label matrix Y. Parameters: α, β, γ and δ. Output: selected features. 1: Initialize W^(i), C^(i), W_y^(i); 2: repeat 3: Update the matrix W^(i) according to Formula (14); 4: Update the matrix C^(i) according to Formula (15); 5: Update the matrix W_y^(i) according to Formula (16); 6: Update the objective function (13); 7: until convergence; 8: Obtain the ordered feature sequence by computing \|\|W(j,:)\|\|₂ for j = 1, 2, ..., d; 9: return top-ranked features as s-UGRFS-f. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that source code for the described methodology is publicly available. |
| Open Datasets | Yes | Experimental Setup Datasets As shown in Table 1, we evaluate our method on six widely used multi-view multi-label datasets, namely yeast (Elisseeff and Weston 2001), SCENE (Chua et al. 2009), VOC07 (Everingham and Winn 2010), MIRFlickr (Huiskes and Lew 2008), IAPRTC12 (Escalante et al. 2010), and 3Sources (Greene and Cunningham 2009). |
| Dataset Splits | Yes | The evaluation results are presented as mean accuracy along with standard deviation obtained from five-fold cross-validation. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | The parameters for each method are tuned within the range of {10^-3, 10^-2, ..., 10^3}. These methods adhere to the above standards to ensure the validity of the comparative results presented in this paper. ... where α, β, γ and δ are trade-off parameters to keep the balance of the model. |
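The alternating-update structure of Algorithm 1 can be sketched as follows. This is a structural skeleton only: the paper's closed-form updates (Formulas 14–16), the objective (13), and the variables C^(i) and W_y^(i) are not reproduced in the report, so a generic ridge-style refit stands in for them, and the function name is hypothetical.

```python
import numpy as np

def ugrfs_rank_features(Xs, Y, alpha=1.0, max_iter=50, tol=1e-6, seed=0):
    """Structural sketch of Algorithm 1 (alternating optimization).

    Xs is a list of view matrices X^(i) of shape (n, d_i); Y is the
    (n, l) label matrix. The closed-form updates of Formulas (14)-(16)
    are replaced here by a placeholder ridge refit of each W^(i).
    """
    rng = np.random.default_rng(seed)
    n_labels = Y.shape[1]
    Ws = [rng.standard_normal((X.shape[1], n_labels)) for X in Xs]  # W^(i)
    prev_obj = np.inf
    for _ in range(max_iter):
        for i, X in enumerate(Xs):
            # placeholder for the update of W^(i) (Formula 14)
            Ws[i] = np.linalg.solve(
                X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y
            )
        # surrogate objective: summed per-view reconstruction error
        obj = sum(np.linalg.norm(X @ W - Y, "fro") ** 2
                  for X, W in zip(Xs, Ws))
        if abs(prev_obj - obj) < tol:  # step 7: until convergence
            break
        prev_obj = obj
    # step 8: rank all features by the l2-norm of the rows of stacked W
    scores = np.linalg.norm(np.vstack(Ws), axis=1)  # ||W(j,:)||_2
    return np.argsort(-scores)  # feature indices, best first
```

The final ranking step matches line 8 of the pseudocode: features whose weight rows have larger ℓ2-norm are selected first.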
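Of the four reported metrics, Hamming Loss and Ranking Loss are compact enough to sketch directly from their standard definitions (Zhang and Zhou 2013); treating score ties as errors in `ranking_loss` is a simplifying assumption:

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    """Fraction of instance-label pairs predicted incorrectly (lower is better)."""
    return float(np.mean(np.asarray(Y_true) != np.asarray(Y_pred)))

def ranking_loss(Y_true, scores):
    """Average, over instances, of the fraction of (relevant, irrelevant)
    label pairs where the irrelevant label receives a score at least as
    high as the relevant one (lower is better)."""
    per_instance = []
    for y, s in zip(np.asarray(Y_true), np.asarray(scores)):
        pos, neg = s[y == 1], s[y == 0]
        if len(pos) == 0 or len(neg) == 0:
            continue  # undefined for all-relevant / all-irrelevant rows
        bad = np.sum(pos[:, None] <= neg[None, :])  # misordered pairs
        per_instance.append(bad / (len(pos) * len(neg)))
    return float(np.mean(per_instance))
```

Average Precision and Coverage follow the same per-instance averaging pattern but need the full label ranking, so they are omitted here.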
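The tuning protocol the paper describes, a grid of {10⁻³, 10⁻², ..., 10³} per trade-off parameter scored by five-fold cross-validation, can be sketched as below; the shuffle seed and the `evaluate` callback are assumptions, not details given in the paper:

```python
import itertools
import numpy as np

# parameter grid reported in the paper: {10^-3, 10^-2, ..., 10^3}
PARAM_GRID = [10.0 ** p for p in range(-3, 4)]

def five_fold_indices(n, seed=0):
    """Shuffle n sample indices and split them into 5 folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, 5)

def select_params(evaluate, n):
    """Grid-search all (alpha, beta, gamma, delta) combinations over
    PARAM_GRID, scoring each by mean five-fold CV performance.
    `evaluate(params, train_idx, test_idx)` is a hypothetical callback
    returning a higher-is-better score (e.g. Average Precision)."""
    folds = five_fold_indices(n)
    best_params, best_score = None, -np.inf
    for params in itertools.product(PARAM_GRID, repeat=4):
        fold_scores = []
        for k in range(5):
            test_idx = folds[k]
            train_idx = np.concatenate(
                [folds[j] for j in range(5) if j != k]
            )
            fold_scores.append(evaluate(params, train_idx, test_idx))
        mean_score = float(np.mean(fold_scores))
        if mean_score > best_score:
            best_params, best_score = params, mean_score
    return best_params, best_score
```

With four parameters this grid has 7⁴ = 2401 combinations, which is consistent with the report's note that results are given as mean ± standard deviation over the five folds.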