Structure-Adaptive Multi-View Graph Clustering for Remote Sensing Data
Authors: Renxiang Guan, Wenxuan Tu, Siwei Wang, Jiyuan Liu, Dayu Hu, Chang Tang, Yu Feng, Junhong Li, Baili Xiao, Xinwang Liu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct extensive experiments on four benchmarks and achieve promising results, well demonstrating the effectiveness and superiority of the proposed method. |
| Researcher Affiliation | Collaboration | 1College of Computer Science and Technology, National University of Defense Technology, Changsha, China 2College of Computer Science and Technology, Hainan University, Haikou, China 3Intelligent Game and Decision Lab, Beijing, China 4College of Systems Engineering, National University of Defense Technology, Changsha, China 5School of Computer Science, China University of Geosciences, Wuhan, China |
| Pseudocode | Yes | Algorithm 1: The learning procedure of SAMVGC |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | Four open-source remote sensing datasets are used in the experiments: Trento, MUUFL, Augsburg, and MDAS. |
| Dataset Splits | No | The paper does not explicitly provide details about specific training/test/validation dataset splits, proportions, or methodologies for partitioning the data. |
| Hardware Specification | Yes | To ensure an equitable comparison between the proposed SAMVGC and these baselines, we compute the average results from ten iterative runs under identical experimental conditions, utilizing a 24GB RTX 3090 GPU and 64GB of RAM. |
| Software Dependencies | No | The paper mentions using a three-layer GCN as the encoder and decoder but does not provide version numbers for any software libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | We set λ1 and λ2 to 100 and 500 respectively, placing the three losses on the same scale, and ip = 10 is used in our model. For the baselines, we use the optimal parameters reported in their papers to derive the final results. We use a three-layer GCN as the encoder and decoder of the graph, with the hidden and output layer dimensions being 128, 256, and 512, respectively. |
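The Experiment Setup row reports a three-layer GCN encoder with layer widths 128, 256, and 512, and loss weights λ1 = 100, λ2 = 500. The paper's own implementation is not released, so the following is only a minimal numpy sketch of such an encoder, assuming the standard GCN propagation rule H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W); the function names and untrained random weights are illustrative, not the authors' code.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix after adding self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt).T * d_inv_sqrt  # D^{-1/2} (A+I) D^{-1/2}

def gcn_encoder(A, X, dims=(128, 256, 512), seed=0):
    """Three stacked GCN layers; dims follow the widths reported in the paper.

    Weights are random placeholders (untrained), so this only illustrates
    the shape of the architecture, not the learned model.
    """
    rng = np.random.default_rng(seed)
    A_norm = normalize_adj(A)
    H = X
    for d_out in dims:
        W = rng.standard_normal((H.shape[1], d_out)) * 0.01
        H = np.maximum(A_norm @ H @ W, 0.0)  # ReLU(A_norm H W)
    return H

# Loss weighting as reported: total = L_rec + 100 * L_1 + 500 * L_2.
# (L_rec, L_1, L_2 are placeholder names; the summary does not quote the
# paper's exact loss terms.)
LAMBDA1, LAMBDA2 = 100.0, 500.0
```

A small usage check: for a 2-node graph with 4-dimensional node features, `gcn_encoder(A, X)` returns an embedding matrix of shape (2, 512), matching the reported output width.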