COPER: Correlation-based Permutations for Multi-View Clustering
Authors: Ran Eisenberg, Jonathan Svirsky, Ofir Lindenbaum
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on ten multi-view clustering benchmark datasets provide empirical evidence for the effectiveness of the proposed model. |
| Researcher Affiliation | Academia | Ran Eisenberg, Jonathan Svirsky, Ofir Lindenbaum; Faculty of Engineering, Bar-Ilan University, Ramat Gan 5290002, Israel |
| Pseudocode | Yes | Appendix J (COPER Algorithm) presents the pseudocode as "Algorithm: COPER". |
| Open Source Code | No | We implement our model using PyTorch, and the code is available for public use. (Footnote: The code will be released at GitHub.) |
| Open Datasets | Yes | We conduct extensive experiments with ten publicly available multi-view datasets used in recent works (Chen et al., 2023; Tang & Liu, 2022; Chao et al., 2024; Sun et al., 2024). The properties of the datasets are presented in Table B, and a complete description appears in Appendix B. |
| Dataset Splits | No | The paper lists several datasets in Appendix B, such as METABRIC (Curtis et al., 2012) and Reuters (Amini et al., 2009), along with their number of samples, classes, and views. However, it does not explicitly provide information on how these datasets were split into training, validation, or test sets for their experiments. |
| Hardware Specification | Yes | All experiments were conducted using an Nvidia A100 GPU server with Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz. |
| Software Dependencies | No | We implement our model using PyTorch, and the code is available for public use. All experiments were conducted using an Nvidia A100 GPU server with Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz. The training is done with the Adam optimizer with learning rate 1e-4 and its additional default parameters in PyTorch. |
| Experiment Setup | Yes | The training is done with the Adam optimizer with learning rate 1e-4 and its additional default parameters in PyTorch. To improve convergence stability, we add decoder modules defined for each view that reconstruct the original samples and are optimized jointly with the main model by adding a mean squared error objective in addition to Lcorr. We train the model by gradually introducing additional loss terms during the training... We start with the Lcorr loss and optionally with the reconstruction loss Lmse. Next, after a few epochs, we add the cross-entropy loss Lce, minimized with predicted pseudo-labels. Finally, we introduce the within-cluster permutations, and the model is optimized with all loss terms during the remaining epochs. To tune the number of epochs for each step, we start with 100 epochs for the first step, 50 epochs for the second step, and 1000 epochs of training in total. In addition, we set the k for the argtopk function to the batch size divided by the number of clusters. We use a fixed cosine similarity threshold of 0.5 for all datasets. |
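The staged training schedule quoted above can be sketched as plain configuration logic. This is a hypothetical reconstruction from the paper's description, not the authors' released code; the function names (`active_losses`, `argtopk_k`) and the 0-indexed epoch convention are our own assumptions.

```python
# Hypothetical sketch of COPER's staged training schedule as described
# in the reproducibility quote; all names below are illustrative.

STAGE1_EPOCHS = 100   # step 1: L_corr (optionally + L_mse reconstruction)
STAGE2_EPOCHS = 50    # step 2: add cross-entropy L_ce on pseudo-labels
TOTAL_EPOCHS = 1000   # remaining epochs: all terms, incl. within-cluster permutations

LEARNING_RATE = 1e-4      # Adam, other parameters at PyTorch defaults
COSINE_THRESHOLD = 0.5    # fixed cosine similarity threshold for all datasets


def active_losses(epoch: int, use_mse: bool = True) -> list[str]:
    """Return the loss terms optimized at a given (0-indexed) epoch."""
    losses = ["L_corr"] + (["L_mse"] if use_mse else [])
    if epoch >= STAGE1_EPOCHS:
        losses.append("L_ce")      # cross-entropy with predicted pseudo-labels
    if epoch >= STAGE1_EPOCHS + STAGE2_EPOCHS:
        losses.append("L_perm")    # within-cluster permutation term
    return losses


def argtopk_k(batch_size: int, n_clusters: int) -> int:
    """k for argtopk: the batch size divided by the number of clusters."""
    return batch_size // n_clusters
```

For example, with a batch size of 256 and 10 clusters, `argtopk_k` gives k = 25, and an epoch in the range [100, 150) optimizes `L_corr`, `L_mse`, and `L_ce` but not yet the permutation term.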