On the Adversarial Robustness of Multi-Kernel Clustering
Authors: Hao Yu, Weixuan Liang, Ke Liang, Suyuan Liu, Meng Liu, Xinwang Liu
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations across seven datasets and eleven MKC methods (seven traditional and four robust) demonstrate AdvMKC's effectiveness, robustness, and transferability. |
| Researcher Affiliation | Academia | College of Computer Science and Technology, National University of Defense Technology, Changsha, China. Correspondence to: Xinwang Liu <EMAIL>. |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The source code is publicly available at https://github.com/csyuhao/AdvMKC-Official. |
| Open Datasets | Yes | We assess the attack performance of Adv MKC on seven benchmark datasets: MSRCv1 (Winn & Jojic, 2005), BBCSport (Greene & Cunningham, 2006), Protein Fold (Damoulas & Girolami, 2008), HW-6Views (Huang et al., 2020), Caltech101-7 (Dueck & Frey, 2007), Citeseer (Giles et al., 1998), and NUS-WIDE-SCENE (Chua et al., 2009). |
| Dataset Splits | No | The paper describes perturbing a subset of samples for adversarial attack evaluation (e.g., 'we modify 10% of the dataset samples'), but does not provide specific train/test/validation splits for the clustering models themselves. Clustering typically evaluates on the entire dataset against ground truth labels rather than using distinct training and testing partitions in the supervised learning sense. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as CPU or GPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions various methods and models, but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | To ensure imperceptible adversarial perturbations, we modify 10% of the dataset samples, with 50% of the views being targeted by default. Following established practices (Chen et al., 2020), the perturbation magnitude is computed using the ℓ2 norm, with a default value of ϵ = 0.1d. ... The process is limited to T steps, defining the episode length of the adversarial attack. ... the discount factor γ, and the hyperparameter η ... α, β, and γ are balancing coefficients. ... α from {0.1, 0.2, 0.3, 0.4, 0.5, 0.6}, β from {1e-5, 1e-4, 1e-3}, and γ from {1e-5, 2e-5, 3e-5, 4e-5, 5e-5, 6e-5}. |
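To make the perturbation budget in the setup row concrete, here is a minimal sketch of an ℓ2-bounded attack protocol: perturb 10% of samples in 50% of views, with each per-sample perturbation projected onto an ℓ2 ball of radius ϵ = 0.1·d. This is an illustrative stand-in, not the authors' AdvMKC attack; the random perturbations, `project_l2` helper, and toy data shapes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_l2(delta, eps):
    """Rescale a perturbation onto the l2 ball of radius eps if it exceeds the budget."""
    norm = np.linalg.norm(delta)
    return delta if norm <= eps else delta * (eps / norm)

# Toy multi-view data: 4 views, 100 samples, d = 20 features per view (hypothetical sizes).
n_views, n_samples, d = 4, 100, 20
X = rng.normal(size=(n_views, n_samples, d))

eps = 0.1 * d  # perturbation budget eps = 0.1 * d, as stated in the setup row

# Attack 10% of the samples in 50% of the views, per the default protocol.
attacked_samples = rng.choice(n_samples, size=int(0.10 * n_samples), replace=False)
attacked_views = rng.choice(n_views, size=n_views // 2, replace=False)

X_adv = X.copy()
for v in attacked_views:
    for i in attacked_samples:
        delta = rng.normal(size=d)  # stand-in for a perturbation found by the attack
        X_adv[v, i] += project_l2(delta, eps)

# Every per-sample perturbation respects the l2 budget.
assert np.all(np.linalg.norm(X_adv - X, axis=-1) <= eps + 1e-9)
```

In the actual method the perturbation would be optimized against the clustering objective rather than sampled at random; the projection step is what enforces the reported ϵ constraint.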