Deep Streaming View Clustering

Authors: Honglin Yuan, Xingfeng Li, Jian Dai, Xiaojian You, Yuan Sun, Zhenwen Ren

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that DSVC significantly outperforms state-of-the-art methods. Experimental results on eight datasets demonstrate that our proposed DSVC significantly outperforms 13 state-of-the-art MVC methods, highlighting its effectiveness in real-world streaming view scenarios.
Researcher Affiliation | Academia | 1School of National Defense Science and Technology, Southwest University of Science and Technology, Mianyang, China 2School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang, China 3Southwest Automation Research Institute, Mianyang, China 4College of Computer Science, Sichuan University, Chengdu, China 5National Key Laboratory of Fundamental Algorithms and Models for Engineering Numerical Simulation, Sichuan University, Chengdu, China. Correspondence to: Yuan Sun <sunyuan EMAIL>, Zhenwen Ren <EMAIL>.
Pseudocode | Yes | A. DSVC Training Algorithm: In this section, we present the main algorithmic process of DSVC, as shown in Algorithm 1. Algorithm 1: The algorithm of DSVC.
Open Source Code | No | The paper does not contain any explicit statement about providing concrete access to source code for the methodology described.
Open Datasets | Yes | We design a series of experiments on eight widely used datasets, which encompass various data types, to demonstrate the effectiveness of our DSVC method. Detailed information for all datasets is provided in Tab. 1. Concretely, ALOI-10 (Geusebroek et al., 2005) contains 1,079 samples across 10 categories... Hand Written (LeCun et al., 1989) consists of 2,100 samples...
Dataset Splits | No | The paper does not provide explicit details on how datasets were split into training, validation, or test sets. It only mentions the use of datasets for experiments and training each view for a number of epochs.
Hardware Specification | No | Our DSVC is implemented in PyTorch 2.3.0, and all experiments are performed on a Linux system with an NVIDIA GPU and 32GB RAM.
Software Dependencies | Yes | Our DSVC is implemented in PyTorch 2.3.0, and all experiments are performed on a Linux system with an NVIDIA GPU and 32GB RAM.
Experiment Setup | Yes | For our DSVC, the autoencoder consists of a fully connected network. The encoder (decoder) network has the architecture Dv-512-1024-512-256 (256-512-1024-512-Dv), where Dv represents the feature dimension of each view stream. In the experiments, we train each collected view for 200 epochs with batch size 256 and learning rate 0.0001. Additionally, we use the Adam optimizer for model optimization and employ ReLU as the activation function. DSVC includes two adjustable parameters, i.e., α and β. For the Hand Written and Scene-15 datasets, we set α and β to 1 and 0.1, respectively. For the Stl10-fea dataset, α and β are set to 0.001 and 1. For all other datasets, we uniformly set α and β to 0.1. To comprehensively evaluate clustering performance, we test all methods using five different random seeds and report the mean and standard deviation as the final results.
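The quoted setup specifies the per-view autoencoder precisely enough to sketch it in PyTorch. A minimal sketch follows; the names `ViewAutoencoder` and `mlp`, the placement of ReLU between hidden layers only, and the MSE reconstruction loss are illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

def mlp(dims):
    """Stack of Linear layers with ReLU between them; linear final layer."""
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:  # assumption: no activation on the output layer
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class ViewAutoencoder(nn.Module):
    """One per-view autoencoder: encoder Dv-512-1024-512-256, mirrored decoder."""
    def __init__(self, d_v):
        super().__init__()
        dims = [d_v, 512, 1024, 512, 256]
        self.encoder = mlp(dims)        # Dv -> 256-d embedding
        self.decoder = mlp(dims[::-1])  # 256 -> Dv reconstruction

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Optimizer settings quoted in the setup: Adam with learning rate 0.0001
# (batch size 256, 200 epochs per collected view).
model = ViewAutoencoder(d_v=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(256, 64)                  # one batch of a 64-dimensional view
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)   # reconstruction objective (assumed)
```

The paper's clustering losses governed by α and β are not reproduced here, since their exact form is not given in the quoted excerpt.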