GeCC: Generalized Contrastive Clustering with Domain Shifts Modeling
Authors: Yujie Chen, Wenhui Wu, Le Ou-Yang, Ran Wang, Debby D. Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four benchmark datasets are performed to demonstrate that our proposed method consistently outperforms other state-of-the-art methods. Elaborate ablation experiments demonstrate the effectiveness of each module in our method. |
| Researcher Affiliation | Academia | Yujie Chen 1, Wenhui Wu 1,2 *, Le Ou-Yang 1,3 *, Ran Wang 4, Debby D. Wang 5. 1 College of Electronics and Information Engineering, Shenzhen University, Shenzhen, 518060, China; 2 Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen, 518060, China; 3 Faculty of Engineering, Shenzhen MSU-BIT University, Shenzhen, 518116, China; 4 College of Mathematics and Statistics, Shenzhen University, Shenzhen, 518060, China; 5 School of Science and Technology, Hong Kong Metropolitan University, Hong Kong, China. EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Details of CDSM and Algorithm 2: Training Procedures of GeCC |
| Open Source Code | Yes | Code: https://github.com/mia-7/GeCC |
| Open Datasets | Yes | We evaluate the proposed method on four image datasets, whose details are shown in Table 1. CIFAR-10 consists of 10 categories, and CIFAR-100 takes its 20 super-classes as the labels. ImageNet-10 and ImageNet-Dogs contain 10 randomly selected subjects and 15 types of dogs, respectively. |
| Dataset Splits | Yes | CIFAR-10: 60,000 images, 10 classes, Train+Test; CIFAR-100: 60,000 images, 20 classes, Train+Test; ImageNet-10: 13,000 images, 10 classes, Train; ImageNet-Dogs: 19,500 images, 15 classes, Train |
| Hardware Specification | No | The paper mentions using ResNet-34 and ResNet-18 as backbone encoders but does not provide specific hardware details such as GPU or CPU models used for experimentation. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not provide specific software dependency versions (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We apply the Adam optimizer with a learning rate of 0.0003 to simultaneously optimize the backbone encoder and two projectors. The mini-batch size is set to 256. We train the network for E = 1000 epochs. The temperatures for instance-level and cluster-level contrastive learning are fixed to 0.5 and 1.0 on all datasets, respectively. As for the predefined augmentation, we first resize all input images to 224×224, then perform flip, color jitter, grayscale, and Gaussian blur in sequence. For small image datasets, including CIFAR-10 and CIFAR-100, we leave out the Gaussian blur augmentation. In this paper, λ1 and λ2 are set to 1 on all experimental datasets. |
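The temperature-scaled contrastive objectives referenced in the setup (instance temperature 0.5, cluster temperature 1.0) can be sketched with a generic NT-Xent loss. This NumPy sketch is an assumption for illustration only: it is a standard normalized temperature-scaled cross-entropy over two augmented views, not the paper's exact GeCC loss, and the function name `nt_xent` is our own.

```python
import numpy as np

def nt_xent(z_a, z_b, temperature=0.5):
    """Generic NT-Xent loss over two views (sketch, not the paper's exact loss).

    z_a, z_b: (n, d) arrays of representations for two augmentations
    of the same n samples; row i of z_a pairs with row i of z_b.
    """
    n = z_a.shape[0]
    z = np.concatenate([z_a, z_b], axis=0)             # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # cosine sims, scaled
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    # positive of sample i is its other view: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))        # log-sum-exp over rows
    loss = -(sim[np.arange(2 * n), pos] - log_denom)   # cross-entropy per row
    return loss.mean()
```

With the quoted settings, the instance-level loss would use `temperature=0.5` and the cluster-level loss `temperature=1.0` (applied over column-wise cluster-assignment vectors), combined with weights λ1 = λ2 = 1.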