Personalized Clustering via Targeted Representation Learning
Authors: Xiwen Geng, Suyun Zhao, Yixin Yu, Borui Peng, Pan Du, Hong Chen, Cuiping Li, Mengdie Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, extensive results show that our method performs well across different clustering tasks and datasets, even when only a limited number of queries are available. Extensive experiments demonstrate that our method is effective in performing personalized clustering tasks. Furthermore, by strict mathematical reasoning, we verify the effectiveness of the proposed PCL method. To summarize, our main contributions are listed as follows: ... Extensive experiments show that our model outperforms five unsupervised clustering approaches and four semi-supervised clustering approaches on three image datasets. |
| Researcher Affiliation | Academia | 1 Key Lab of Data Engineering and Knowledge Engineering of MOE, Renmin University of China; 2 School of Information, Renmin University of China; 3 School of Statistics, Renmin University of China. EMAIL, EMAIL |
| Pseudocode | No | The full querying and clustering algorithm of the model is detailed in the Appendix. The appendix is not provided in the main text. |
| Open Source Code | Yes | Code is available at https://github.com/hhdxwen/PCL. |
| Open Datasets | Yes | We assessed our method using three widely-used image datasets: CIFAR-10, CIFAR-100 (Krizhevsky 2009), and ImageNet-10 (Deng et al. 2009). |
| Dataset Splits | No | The paper states it differentiates between training and test sets and applies constraints only to the training set, and mentions reconstructing original datasets into artificial versions, but it does not provide specific proportions, sample counts, or a detailed methodology for how these splits were created or used, nor does it refer to standard splits with specific details beyond the dataset names themselves. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions using ResNet34 as the backbone network and Adam for optimization, but it does not specify any software dependencies with their version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Implementation Details. All methods used ResNet34 as the backbone network without modification. Parameters related to deep contrastive clustering followed previous methods (Li et al. 2021; Zhong et al. 2021). The batch size was 128, and Adam with an initial learning rate of 1e-5 was used for optimization. All images were resized to 128 × 128. The feature dimensionality M was set to 128. Hyperparameters were consistent across datasets with λ = 4 and ϵ = 0.2. Constraints were extended by labeling similar pairs with high confidence after several iterations. For semi-supervised and active constrained methods, 10k pairwise constraints were set on the three datasets. Our method used a training epoch E = 500 to determine final performance. |
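The reported implementation details above can be collected into a single configuration object, which is one way a re-implementation might pin down the reproducibility-relevant values. This is a minimal sketch, not the authors' code; the class and field names are hypothetical, and only the numeric values come from the paper.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PCLExperimentConfig:
    """Hyperparameters reported in the paper's Implementation Details.

    Field names are illustrative; only the values are taken from the paper.
    """
    backbone: str = "ResNet34"      # used without modification
    batch_size: int = 128
    learning_rate: float = 1e-5     # Adam initial learning rate
    image_size: int = 128           # all images resized to 128 x 128
    feature_dim: int = 128          # feature dimensionality M
    lam: float = 4.0                # λ, consistent across datasets
    eps: float = 0.2                # ϵ, consistent across datasets
    num_constraints: int = 10_000   # pairwise constraints for baselines
    epochs: int = 500               # training epochs E


cfg = PCLExperimentConfig()
print(cfg.backbone, cfg.batch_size, cfg.epochs)  # ResNet34 128 500
```

Freezing the dataclass makes the configuration immutable, so a run's settings cannot be mutated mid-experiment, which is a small but useful reproducibility safeguard.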