Trusted Multi-View Classification via Evolutionary Multi-View Fusion
Authors: Xinyan Liang, Pinhan Fu, Yuhua Qian, Qian Guo, Guoqing Liu
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate the effectiveness of this straightforward yet powerful strategy in mitigating imbalanced multi-view learning issues, particularly on complex many-view datasets exceeding three views. Extensive evaluations across 13 multi-view datasets validate the superior performance of our proposed method compared to other trusted multi-view learning approaches. |
| Researcher Affiliation | Academia | 1 Institute of Big Data Science and Industry, Key Laboratory of Evolutionary Science Intelligence of Shanxi Province, Shanxi University; 2 Shanxi Key Laboratory of Big Data Analysis and Parallel Computing, School of Computer Science and Technology, Taiyuan University of Science and Technology |
| Pseudocode | Yes | Algorithm 1 Evolutionary NAS method... Algorithm 2 Algorithm for trusted multi-view classification |
| Open Source Code | Yes | The code is available at https://github.com/fupinhan123/TEF. |
| Open Datasets | Yes | Animals with Attributes (AWA) (Lampert et al., 2014): This dataset includes 30,475 images of 50 animal subjects with seven views. [...] NUS-WIDE-128 (NUS) (Tang et al., 2017): This dataset contains 43,800 single-label images from 128 categories. [...] Reuters (Amini et al., 2009): This is a multilingual multi-view dataset where each document is described by five different languages: English, French, German, Spanish, and Italian. |
| Dataset Splits | Yes | In the experiment, to avoid the randomness caused by data partitioning and network initialization, we adopted a 5-fold cross-validation strategy within the overall framework, dividing each dataset into training and testing sets. Notably, during the evolutionary search for the pseudo-view architecture, the training set was further split into a training set and a validation set to prevent data leakage. [...] As shown in Figure 4, we first performed 5-fold cross-validation on the original dataset, dividing the data into training and testing sets in an 8:2 ratio. |
| Hardware Specification | Yes | All metrics were measured on a single P100 GPU. [...] Our computational environment was Ubuntu 16.04.4, with 512 GB DDR4 RDIMM, 2x 40-core Intel Xeon CPU E5-2698 v4 @ 2.20 GHz, and NVIDIA Tesla P100 (16 GB GPU memory). Using 7 NVIDIA Tesla P100 GPUs, the population size was set to 28. |
| Software Dependencies | No | The paper mentions training models using the Adam algorithm and specifies learning rates, decay rates, and epochs, but does not provide specific version numbers for software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used for implementation. |
| Experiment Setup | Yes | All DNN models were trained using the Adam algorithm. The learning rate was set to 0.001, the exponential decay rate for the first moment estimate was 0.9, and for the second moment estimate was 0.999. Each network was trained for 100 epochs. [...] To effectively utilize GPU resources, the population size was set as a multiple of the number of GPUs. Using 7 NVIDIA Tesla P100 GPUs, the population size was set to 28. Following (Shi et al., 2022), the number of generations was set to 20, with crossover and mutation probabilities set to 0.9 and 0.2, respectively. [...] The detailed hyperparameters used in the second stage of TEF for the six datasets are shown in Table 8. |
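The nested splitting scheme reported in the Dataset Splits row (outer 5-fold cross-validation giving 8:2 train/test sets, with each training set further split into train/validation for the architecture search) can be sketched as below. This is a minimal illustration, not the authors' code; the inner `val_fraction` of 0.2 is an assumption, since the paper does not state the inner split ratio.

```python
import random

def nested_cv_splits(n_samples, n_outer_folds=5, val_fraction=0.2, seed=0):
    """Sketch of the paper's nested split: outer 5-fold CV yields
    train/test (8:2); each outer training set is further split into
    train/validation for the evolutionary search, so the test fold
    never leaks into the architecture search."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    # interleaved slicing gives 5 near-equal, disjoint test folds
    folds = [idx[k::n_outer_folds] for k in range(n_outer_folds)]
    splits = []
    for k in range(n_outer_folds):
        test = folds[k]
        train_full = [i for j, f in enumerate(folds) if j != k for i in f]
        n_val = int(len(train_full) * val_fraction)  # assumed inner ratio
        val, train = train_full[:n_val], train_full[n_val:]
        splits.append((train, val, test))
    return splits
```

Each outer fold thus contributes a disjoint (train, validation, test) triple; the search uses only train/validation, and the test fold scores the final architecture.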
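The evolutionary-search hyperparameters in the Experiment Setup row (population 28, 20 generations, crossover probability 0.9, mutation probability 0.2) can be placed in a toy loop like the one below. Only the four hyperparameter values come from the paper; the binary-genome encoding, tournament selection, one-point crossover, and bit-flip mutation are illustrative assumptions — the actual method searches over multi-view fusion architectures.

```python
import random

# Hyperparameter values reported in the paper's setup
POP_SIZE = 28        # set as a multiple of the 7 GPUs used
N_GENERATIONS = 20
P_CROSSOVER = 0.9
P_MUTATION = 0.2

def evolve(fitness, genome_len=8, seed=0):
    """Minimal GA sketch using the paper's reported hyperparameters.
    Genomes, selection, and operators here are toy assumptions."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(POP_SIZE)]
    for _ in range(N_GENERATIONS):
        # binary tournament selection (assumed; not specified in the paper)
        parents = [max(rng.sample(pop, 2), key=fitness) for _ in range(POP_SIZE)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            a, b = a[:], b[:]
            if rng.random() < P_CROSSOVER:        # one-point crossover
                cut = rng.randrange(1, genome_len)
                a[cut:], b[cut:] = b[cut:], a[cut:]
            for child in (a, b):
                if rng.random() < P_MUTATION:     # bit-flip mutation
                    i = rng.randrange(genome_len)
                    child[i] ^= 1
            children += [a, b]
        pop = children
    return max(pop, key=fitness)

best = evolve(sum)  # toy fitness: maximize the number of 1-bits
```

With a population that is a multiple of the GPU count, each generation's 28 fitness evaluations can be dispatched 4-per-GPU across the 7 P100s, which is the stated reason for choosing 28.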