TOTF: Missing-Aware Encoders for Clustering on Multi-View Incomplete Attributed Graphs
Authors: Mengyao Li, Xu Zhou, Jiapeng Zhang, Zhibang Yang, Cen Chen, Kenli Li
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate that our method achieves significant accuracy improvements across different levels of incompleteness and is less affected by incomplete attributes. The source code is available at https://anonymous.4open.science/r/TOTF-main. |
| Researcher Affiliation | Academia | Mengyao Li¹, Xu Zhou¹, Jiapeng Zhang¹, Zhibang Yang¹, Cen Chen², Kenli Li¹ (¹Hunan University; ²South China University of Technology). EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: The TOTF Algorithm |
| Open Source Code | Yes | The source code is available at https://anonymous.4open.science/r/TOTF-main. |
| Open Datasets | Yes | We conduct experiments on five datasets: ACM [Lin and Kang, 2021], AMiner [Hong et al., 2020], Cora, Citeseer, and Wiki [Fettal et al., 2023]. |
| Dataset Splits | No | The paper uses several datasets (ACM, AMiner, Cora, Citeseer, Wiki) and varies the attribute missing rate, but it does not specify how these datasets were split into training, validation, or test sets. |
| Hardware Specification | No | The paper states: 'Details about datasets, hardware platform, and model parameters are shown in Appendix B.1.' However, Appendix B.1 is not included in the provided text, thus specific hardware details are not available. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as library names with version numbers, used to replicate the experiment. |
| Experiment Setup | No | The paper states: 'Details about datasets, hardware platform, and model parameters are shown in Appendix B.1.' However, Appendix B.1 is not included in the provided text. The main text discusses sensitivity to the 'reminder rate h' and the dimensional interaction coefficient λ1, but specific training hyperparameters such as learning rate, batch size, or number of epochs are not provided. |