A Simple yet Effective Hypergraph Clustering Network

Authors: Qianqian Wang, Bowen Zhao, Zhengming Ding, Xiangdong Zhang, Quanxue Gao

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on five benchmark datasets demonstrate HCN's superiority over state-of-the-art methods.
Researcher Affiliation | Academia | Qianqian Wang1, Bowen Zhao1, Zhengming Ding2, Xiangdong Zhang1 and Quanxue Gao1. 1School of Telecommunications Engineering, Xidian University, Xi'an, China; 2Department of Computer Science, Tulane University, New Orleans, LA.
Pseudocode | No | The paper describes the proposed method (HCN) using textual descriptions and mathematical formulations in Section 3, but it does not include a structured pseudocode block or algorithm figure.
Open Source Code | No | The paper does not contain any explicit statement about making the source code available, nor does it provide a link to a code repository for the described methodology.
Open Datasets | Yes | The experiments are conducted on five benchmark hypergraph datasets: CORA [Sen et al., 2008], CITESEER [Sen et al., 2008], PUBMED [Sen et al., 2008], CORA-CA [Rossi and Ahmed, 2015], and 20NEWS [Dua and Graff, 2017].
Dataset Splits | No | The fused representation Z is then utilized as input to k-means clustering to partition the nodes into multiple disjoint groups. To avoid the influence of randomness, the mean and standard deviation of these metrics are calculated over 10 independent runs for each method. The paper does not specify percentages or sample counts for training, validation, or test sets (this is an unsupervised clustering task), nor does it define specific splits for the k-means evaluation.
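The evaluation protocol quoted above (k-means on the fused embedding, metrics averaged over 10 independent runs) can be sketched as follows. This is a hedged illustration, not the paper's code: the fused representation `Z`, the labels, and the cluster count are synthetic stand-ins, and the metrics shown (NMI, ARI) are common clustering measures that may differ from the paper's exact set.

```python
# Sketch of the reported protocol: k-means on the fused embedding Z over
# 10 independent runs, reporting mean and standard deviation per metric.
# Z, labels, and n_clusters are toy stand-ins, not the paper's data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

rng = np.random.default_rng(0)
n_clusters = 3
# Toy "fused representation": three Gaussian blobs in 8 dimensions.
Z = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 8)) for c in range(n_clusters)])
labels = np.repeat(np.arange(n_clusters), 50)

nmi_scores, ari_scores = [], []
for seed in range(10):  # 10 independent runs, as stated in the paper
    pred = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Z)
    nmi_scores.append(normalized_mutual_info_score(labels, pred))
    ari_scores.append(adjusted_rand_score(labels, pred))

print(f"NMI: {np.mean(nmi_scores):.3f} +/- {np.std(nmi_scores):.3f}")
print(f"ARI: {np.mean(ari_scores):.3f} +/- {np.std(ari_scores):.3f}")
```

Because only aggregate statistics over the 10 runs are reported, no train/validation/test split is needed for this step, which is consistent with the "No" result above.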
Hardware Specification | Yes | The experiments are conducted on a system equipped with an Intel Core i9-13900K CPU, an NVIDIA GeForce RTX 4090 GPU, and 64GB of RAM.
Software Dependencies | No | All experiments are implemented using the PyTorch framework, with a maximum training epoch limit of 400. No specific version numbers for PyTorch or other libraries are provided.
Experiment Setup | Yes | All experiments are implemented using the PyTorch framework, with a maximum training epoch limit of 400. The Adam optimizer [Kingma, 2014] is used to minimize the total loss, and the k-means algorithm is applied to the fused embeddings to obtain the final clustering results. Effect of hyper-parameter β: ... values of β, which are chosen from the set {0.01, 0.1, 1, 10, 100}. Effect of hyper-parameters α and t: ... α is selected from the set {0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5}, while t is chosen from {1, 2, 4, 8, 16}.
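The setup quoted above pins down the optimizer (Adam), the epoch budget (400), and the hyper-parameter grids for β, α, and t. A minimal sketch of that configuration is below; the model and loss are placeholders, not HCN itself, since the paper releases no code, and the learning rate is an assumption the paper does not state.

```python
# Hedged sketch of the reported setup: Adam optimizer, up to 400 epochs,
# and the hyper-parameter grids quoted from the paper. The single-layer
# "model" and squared-norm "loss" are placeholders, not HCN's objective.
import itertools
import torch

# Grids exactly as reported in the paper.
betas = [0.01, 0.1, 1, 10, 100]
alphas = [0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5]
ts = [1, 2, 4, 8, 16]
grid = list(itertools.product(betas, alphas, ts))  # 5 * 10 * 5 = 250 configs

def train_once(beta, alpha, t, max_epochs=400, lr=1e-3):
    # lr is assumed; the paper does not report it.
    torch.manual_seed(0)
    X = torch.randn(32, 16)          # toy input features
    model = torch.nn.Linear(16, 8)   # placeholder encoder
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss = None
    for _ in range(max_epochs):      # epoch limit of 400, as reported
        opt.zero_grad()
        Z = model(X)
        # Stand-in total loss; HCN's real objective combines several terms
        # weighted by beta (alpha and t enter its propagation, unused here).
        loss = Z.pow(2).mean() * beta
        loss.backward()
        opt.step()
    return loss.item()

loss = train_once(0.1, 0.2, 4)
print(len(grid), loss)
```

Enumerating `grid` and calling `train_once` per configuration reproduces the shape of the sensitivity study described for β, α, and t, even though the internals of each run differ from HCN's actual training.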