Deep Multi-modal Graph Clustering via Graph Transformer Network

Authors: Qianqian Wang, Haiming Xu, Zihao Zhang, Wei Feng, Quanxue Gao

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental The paper is experimental. Evidence: "Extensive experiments demonstrate the effectiveness of our algorithm." Experimental setting (Metrics and Databases): "We evaluated our approach on four datasets: AMAP, WIKI, Citeseer, and Cora. These datasets vary in size and structure, with Cora and Citeseer containing citation networks, WIKI featuring co-occurrence relationships, and AMAP representing product networks. To enrich the graph representation, we generated an additional attribute modality using the Fast Fourier Transform (FFT). Table 2 summarizes the dataset statistics." Experimental results: the proposed multi-modal graph clustering method outperforms the comparison algorithms in ACC, NMI, and ARI on all four datasets (Cora, Citeseer, WIKI, and AMAP), which the authors present as evidence of its consistency and generalizability. Ablation analysis: ablation experiments on the loss function reveal the individual effects of the reconstruction loss, the graph structure loss, and the KL divergence loss on model performance.
Researcher Affiliation Academia 1School of Telecommunications Engineering, Xidian University, Xi'an, China 2School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, China 3Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, Anhui University EMAIL, EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode No The paper describes the methodology using prose and mathematical equations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code No The paper states: "The experimental environment was based on the Python platform using the PyTorch framework (Python version 3.10.13) and the MATLAB experimental simulation software to realize the proposed method and its comparison with existing data clustering methods." However, it does not provide any explicit statement about releasing the authors' own source code or a link to a repository.
Open Datasets Yes "We evaluated our approach on four datasets: AMAP, WIKI, Citeseer, and Cora. These datasets vary in size and structure, with Cora and Citeseer containing citation networks, WIKI featuring co-occurrence relationships, and AMAP representing product networks. Table 2 summarizes the dataset statistics." Cited sources: AMAP (Liu et al. 2022); WIKI (Yang et al. 2015); Citeseer (Sen et al. 2008); Cora (Sen et al. 2008).
Dataset Splits No The paper mentions the datasets used but does not specify any training, testing, or validation splits, nor does it refer to predefined splits with citations for reproducibility of data partitioning.
Hardware Specification Yes "In this study, all experiments were performed on a Windows server equipped with an NVIDIA GeForce RTX 4090 graphics card with driver version 552.41 and CUDA version 12.4."
Software Dependencies No "The experimental environment was based on the Python platform using the PyTorch framework (Python version 3.10.13) and the MATLAB experimental simulation software to realize the proposed method and its comparison with existing data clustering methods." Only the Python version (3.10.13) is specified; PyTorch and MATLAB are mentioned without version numbers, which is insufficient to pin down reproducible software dependencies.
Experiment Setup Yes Parameter analysis covers four factors. Regularization parameters λ1 and λ2: these are crucial for model performance, with values searched over the range 10^-2 to 10^2; experiments show performance is optimized when λ1 = 0.1 and λ2 = 1. Hyperparameter t: the number of neighbor nodes t significantly affects the performance of the graph convolutional networks. Number of network layers: network depth has a significant impact on clustering performance. Graph smoothing: graph smoothing (e.g., Laplacian smoothing) improves clustering performance by increasing the consistency of node features and reducing noise.
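The graph-smoothing step mentioned above is only described qualitatively in the paper. As one common realization (not necessarily the authors' exact formulation), Laplacian smoothing can be applied as t rounds of propagation with the symmetrically normalized adjacency with self-loops, the same operator used in GCN preprocessing; the sketch below assumes dense NumPy matrices for clarity.

```python
import numpy as np

def laplacian_smooth(X, A, t=2):
    """Smooth node features X with t rounds of X <- A_norm @ X, where
    A_norm = D^{-1/2} (A + I) D^{-1/2} is the symmetrically normalized
    adjacency with self-loops (a standard Laplacian-smoothing operator)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    for _ in range(t):
        X = A_norm @ X                      # one smoothing round
    return X

# Toy example: a triangle graph (3 mutually connected nodes).
A = np.ones((3, 3)) - np.eye(3)
X = np.array([[1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
X_smooth = laplacian_smooth(X, A, t=1)
```

On this fully connected toy graph, one smoothing round already drives all node features to a common value, illustrating why smoothing increases feature consistency across neighboring nodes, and why too many rounds (too many layers) can over-smooth and hurt clustering.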