Hyperspherical Prototype Node Clustering

Authors: Jitao Lu, Danyang Wu, Feiping Nie, Rong Wang, Xuelong Li

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results on popular benchmark datasets demonstrate the superiority of our method compared to other state-of-the-art clustering methods, and visualization results illustrate improved separability of the learned embeddings." ... "In this section, we conduct extensive experiments to evaluate the effectiveness of the proposed HPNC paradigm."
Researcher Affiliation | Academia | Jitao Lu (School of Computer Science and School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University); Danyang Wu (State Key Laboratory for Manufacturing Systems Engineering, School of Electronic and Information Engineering, Xi'an Jiaotong University); Feiping Nie, Rong Wang, and Xuelong Li (School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University)
Pseudocode | No | The paper describes the methodology in detail with equations and textual descriptions but does not include any clearly labeled "Pseudocode" or "Algorithm" block.
Open Source Code | No | "Detailed hyperparameter settings can be found in YAML configuration files from our code." However, no direct link to a public repository is provided, nor is there an explicit statement of code release.
Open Datasets | Yes | "We evaluate the clustering performance on five widely adopted attributed graph datasets: Cora, CiteSeer, PubMed (Yang et al., 2016), ACM and DBLP (Bo et al., 2020)." Footnoted links: https://github.com/kimiyoung/planetoid and https://github.com/bdy9527/SDCN
Dataset Splits | No | The paper uses widely adopted benchmark datasets but does not explicitly state the training/validation/test splits used for the experiments. It mentions "We run all competitors five times with distinct random seeds and report the averaged results and standard deviations of the best epochs" but not the data-splitting methodology.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to conduct the experiments.
Software Dependencies | Yes | "We implement our proposed HPNC in PyTorch 2.0 and PyG 2.3.0 and use GraphGym (You et al., 2020) for experiment management to ensure reproducibility."
Experiment Setup | Yes | "Throughout the experiments, the encoders of HPNC are all composed of two GAT layers with 4 128-d attention heads and dropout probability 0.1 for attention coefficients. We also apply dropout with probability 0.2 between the GAT layers and use PReLU as the activation function. The decoder is a single GAT layer without non-linear activation. The coefficients α, β and γ in Eqs. (10) and (14) are tuned by random search within the following ranges: α ∈ {0.0, 0.01, 0.02}, β, γ ∈ (0.0, 0.1]."
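As a rough illustration of the tuning procedure the paper reports, the random search over the loss coefficients α, β, and γ can be sketched with the Python standard library alone. The `evaluate` callable and the exact sampling scheme are assumptions for illustration, not details from the paper; in practice each draw would correspond to a full training run scored by a clustering metric.

```python
import random

# Search ranges reported in the paper:
#   alpha in {0.0, 0.01, 0.02} (discrete); beta, gamma in (0.0, 0.1] (continuous)
ALPHA_CHOICES = [0.0, 0.01, 0.02]


def sample_config(rng: random.Random) -> dict:
    """Draw one hyperparameter configuration at random."""
    return {
        "alpha": rng.choice(ALPHA_CHOICES),
        # uniform draw approximating the half-open interval (0, 0.1]
        "beta": rng.uniform(1e-6, 0.1),
        "gamma": rng.uniform(1e-6, 0.1),
    }


def random_search(evaluate, n_trials: int = 20, seed: int = 0):
    """Return the best-scoring configuration among n_trials random draws.

    `evaluate` maps a config dict to a score (higher is better); here it is a
    hypothetical stand-in for training HPNC and measuring clustering quality.
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

For example, `random_search(lambda cfg: -abs(cfg["beta"] - 0.05), n_trials=50)` would favor configurations whose β lies near 0.05 under this toy objective.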