Exploring a Principled Framework for Deep Subspace Clustering

Authors: Xianghan Meng, Zhiyuan Huang, Wei He, Xianbiao Qi, Rong Xiao, Chun-Guang Li

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments on the synthetic data and six benchmark datasets to verify our theoretical findings and demonstrate the superior performance of our proposed deep subspace clustering approach."
Researcher Affiliation | Collaboration | Xianghan Meng, Zhiyuan Huang & Wei He: Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China; Xianbiao Qi & Rong Xiao: Intellifusion, Shenzhen, P.R. China; Chun-Guang Li: Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China
Pseudocode | Yes | Algorithm 1: Scalable & Efficient Implementation of PRO-DSC via Differential Programming
Open Source Code | Yes | "To ensure the reproducibility of our work, we have released the source code."
Open Datasets | Yes | "All datasets used in our experiments are publicly available, and we have provided a comprehensive description of the data processing steps in Appendix B.1."
Dataset Splits | Yes | "For all the datasets except for ImageNet-Dogs, we train the network to implement PRO-DSC on the train set and test it on the test set to validate the generalization of the learned model. For the ImageNet-Dogs dataset, which does not have a test set, we train the network on the train set and report the clustering performance on the training set. For a direct comparison, we summarize the basic information of these datasets in Table B.1."
Hardware Specification | Yes | "All the experiments are conducted on a single NVIDIA RTX 3090 GPU and Intel Xeon Platinum 8255C CPU."
Software Dependencies | No | The paper mentions software like the "SGD optimizer" and "scikit-learn" but does not specify version numbers.
Experiment Setup | Yes | "We train the network by the SGD optimizer with the learning rate set to η = 10^-4, and the weight decay parameters of f(·; Ψ) and h(·; Ψ) are set to 10^-4 and 5×10^-3, respectively. We set α = d 0.1 nb for all the experiments. We summarize the hyper-parameters for training the network to implement PRO-DSC in Table B.2."
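The per-module weight-decay setup quoted above can be sketched as optimizer parameter groups. This is a hypothetical illustration, not the authors' released code: the names `feature_params` and `projection_params` are placeholders for the parameters of f(·; Ψ) and h(·; Ψ), and in practice the groups would be passed to an SGD optimizer (e.g. `torch.optim.SGD`).

```python
# Hypothetical sketch of the reported PRO-DSC training configuration:
# one learning rate shared by both modules, but different weight decay
# for the feature network f(.; Psi) and the projection head h(.; Psi).

LR = 1e-4            # learning rate eta = 10^-4 (from the paper)
WD_FEATURE = 1e-4    # weight decay reported for f(.; Psi)
WD_PROJECTION = 5e-3 # weight decay reported for h(.; Psi)

def make_param_groups(feature_params, projection_params):
    """Build SGD-style parameter groups with per-module weight decay.

    Each group dict follows the convention used by common deep-learning
    optimizers: a list of parameters plus group-specific hyperparameters.
    """
    return [
        {"params": list(feature_params), "lr": LR, "weight_decay": WD_FEATURE},
        {"params": list(projection_params), "lr": LR, "weight_decay": WD_PROJECTION},
    ]
```

With PyTorch, for example, the groups would be consumed as `torch.optim.SGD(make_param_groups(f.parameters(), h.parameters()))`, which applies each group's `weight_decay` only to that module's parameters.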