Trusted Multi-View Classification with Expert Knowledge Constraints
Authors: Xinyan Liang, Shijie Wang, Yuhua Qian, Qian Guo, Liang Du, Bingbing Jiang, Tingjin Luo, Feijiang Li
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on three multiview datasets for sleep stage classification demonstrate that TMCEK achieves state-of-the-art performance while offering interpretability at both the feature and decision levels. |
| Researcher Affiliation | Academia | 1. Institute of Big Data Science and Industry, Key Laboratory of Evolutionary Science Intelligence of Shanxi Province, Shanxi University, Taiyuan, China; 2. Shanxi Key Laboratory of Big Data Analysis and Parallel Computing, School of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, China; 3. School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China; 4. College of Science, National University of Defense Technology, Changsha, China. |
| Pseudocode | No | The paper describes the proposed method and framework using detailed text and mathematical formulations (e.g., equations for Gabor function, uncertainty, and loss functions), but it does not present any explicitly labeled pseudocode or algorithm blocks. The overall architecture is shown in Fig. 1, and method details are described in sections 3.1 and 3.2 without using pseudocode. |
| Open Source Code | Yes | The code is available at https://github.com/jie019/TMCEK_ICML2025. |
| Open Datasets | Yes | In the experiments, we use three public datasets: Sleep-EDF 20, Sleep-EDF 78, and the Sleep Heart Health Study (SHHS), as shown in Appendix A.4. ... The Sleep-EDF dataset, sourced from PhysioBank (Goldberger et al., 2000), ... The SHHS dataset (Zhang et al., 2018; Quan et al., 1997)... Multi-view Datasets: HandWritten ... Scene15 ... CUB ... PIE |
| Dataset Splits | Yes | We use per-subject 20-fold cross validation, dividing the subjects in each dataset into 20 groups. The recordings in one group were considered as test data, and the rest were used as training data. This process was repeated until all folds were iterated. ... In all datasets, 20% of the instances are allocated as the test set. The average performance is reported by running each test case five times. |
| Hardware Specification | Yes | The model is trained on the PyTorch platform with an NVIDIA RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions training on the 'PyTorch platform' and using the 'Adam optimizer' but does not specify version numbers for PyTorch or any other software used, such as Python, CUDA, or additional machine learning libraries beyond the platform name. |
| Experiment Setup | Yes | We utilize fully connected networks with a ReLU layer to extract view-specific evidence. The Adam optimizer is used to train the network, where L2-norm regularization is set to 1e-5. We employ 5-fold cross-validation to select the learning rate from the options of 3e-3. ... We use the Adam optimizer with a batch size of 16 to train the proposed model, and the learning rate is initialized as 3.125e-5 (0.0005/batch size) and 6.25e-4 (0.01/batch size) in the single-epoch network and multi-epoch network, respectively. In addition, the frequency f of the Gabor kernels was clamped between 0 and 35 Hz. |
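The hyperparameters quoted above follow a simple pattern: each learning rate is a base rate divided by the batch size, and the Gabor kernel frequency is clamped to a fixed band. The sketch below is a minimal, hypothetical helper reproducing that arithmetic; the function names and structure are illustrative assumptions, not the authors' code (which is available at the linked repository).

```python
# Hypothetical helpers mirroring the reported setup (not the authors' code):
# learning rates are given as base_lr / batch_size, and the Gabor kernel
# frequency f is clamped to the [0, 35] Hz band.

BATCH_SIZE = 16  # batch size reported in the experiment setup


def scaled_lr(base_lr: float, batch_size: int = BATCH_SIZE) -> float:
    """Per-batch learning rate as reported: base_lr divided by batch size."""
    return base_lr / batch_size


def clamp_gabor_freq(f_hz: float, lo: float = 0.0, hi: float = 35.0) -> float:
    """Clamp the Gabor kernel frequency to the reported [0, 35] Hz range."""
    return max(lo, min(hi, f_hz))


single_epoch_lr = scaled_lr(0.0005)  # 0.0005 / 16 = 3.125e-5
multi_epoch_lr = scaled_lr(0.01)     # 0.01 / 16 = 6.25e-4
```

With these values, the two initial learning rates reported in the table (3.125e-5 and 6.25e-4) fall out directly, which is useful when re-deriving the configuration for a different batch size.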