Deep Incomplete Multi-view Learning via Cyclic Permutation of VAEs
Authors: Xin Gao, Jian Pu
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our approach on seven diverse datasets with varying missing ratios, achieving superior performance in multi-view clustering and generation tasks. |
| Researcher Affiliation | Academia | Xin Gao (Fudan University); Jian Pu (Fudan University) |
| Pseudocode | Yes | Algorithm 1: Sattolo's Algorithm for Cyclic Permutation |
| Open Source Code | Yes | For more implementation details, please refer to the code provided in the supplementary material. |
| Open Datasets | Yes | We extensively evaluated the proposed method across seven diverse multi-view datasets, summarized in Table 1. These datasets encompass a variety of view types with different dimensions, originating from diverse sensors or descriptors, as well as real-world perspectives captured from different angles. PolyMNIST (Sutter et al., 2021) consists of five images per data point, all sharing the same digit label but varying in handwriting style and background. ShapeNet is a large-scale repository of 3D CAD models of objects (Chang et al., 2015). |
| Dataset Splits | Yes | For the PolyMNIST dataset, ... We use the original split with 60K tuples for training and 10K for testing. All models are trained on incomplete observations (η = 0.5), with 50% of samples having 1 to 4 views missing. ... We use an 80:20 train-test split and apply the same experimental settings as in Section 4.2.1. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used for running its experiments, such as GPU models, CPU types, or other detailed computer specifications. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | The best performance is achieved when β1 = 5.0 and β2 = 2.5, resulting in a clustering accuracy of 90.76. ... For the experiments in Section 4.1, we employed fully connected neural networks similar to those used in previous studies. The choice of architecture depends on the dataset's characteristics, such as input dimension and number of samples. We used one of the following network configurations: d_v–256–256–1024–d ... All methods were trained for 300 epochs across all datasets. |
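The table quotes the paper's Algorithm 1, Sattolo's algorithm for sampling a cyclic permutation. The paper's own pseudocode is in the supplementary material; the following is an independent minimal sketch of the standard algorithm, which returns a uniformly random permutation that forms a single cycle (so no view index is mapped to itself):

```python
import random

def sattolo_cycle(items):
    """Sattolo's algorithm: return a uniformly random permutation
    of `items` consisting of exactly one cycle (no fixed points)."""
    a = list(items)
    # Like Fisher-Yates, but swap position i with a strictly
    # earlier position j < i, which forces a single cycle.
    for i in range(len(a) - 1, 0, -1):
        j = random.randrange(i)  # 0 <= j < i, never j == i
        a[i], a[j] = a[j], a[i]
    return a

# Example: a cyclic assignment over 5 view indices, as in PolyMNIST.
perm = sattolo_cycle(range(5))
```

Because every position is swapped with a strictly earlier one, following `k -> perm[k]` from any starting index visits all elements before returning, which is the property a cyclic cross-view assignment needs.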
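The splits row describes the incompleteness protocol: at η = 0.5, half the samples have 1 to 4 of their 5 views dropped. The paper's exact masking code is in its supplementary material; this is a hypothetical sketch (function name and mask layout are assumptions) of one way to generate such an observation mask:

```python
import random

def make_missing_mask(n_samples, n_views=5, eta=0.5, seed=0):
    """Return an (n_samples x n_views) 0/1 mask; 1 = view observed.
    A fraction `eta` of samples have 1..n_views-1 views removed,
    so every sample keeps at least one observed view."""
    rng = random.Random(seed)
    mask = [[1] * n_views for _ in range(n_samples)]
    # Choose which samples are incomplete.
    incomplete = rng.sample(range(n_samples), int(eta * n_samples))
    for idx in incomplete:
        n_drop = rng.randint(1, n_views - 1)  # drop 1..4 of 5 views
        for v in rng.sample(range(n_views), n_drop):
            mask[idx][v] = 0
    return mask
```

A mask like this would typically be fixed once per run (hence the seed) so that all compared methods see the same missing-view pattern.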