Going Beyond Consistency: Target-oriented Multi-view Graph Neural Network

Authors: Sujia Huang, Lele Fu, Shuman Zhuang, Yide Qiu, Bo Huang, Zhen Cui, Tong Zhang

IJCAI 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on three types of multi-view datasets validate the superiority of TGNN."
Researcher Affiliation | Academia | 1) School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; 2) School of Systems Science and Engineering, Sun Yat-Sen University, Guangzhou, China; 3) College of Computer and Data Science, Fuzhou University, Fuzhou, China; 4) School of Artificial Intelligence, Beijing Normal University, Beijing, China. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | Yes | "The algorithm of TGNN is presented in Appendix B."

Algorithm 1: TGNN
Input: multi-view graphs {G^v}_{v=1..V}, initial features X, labels Y
Output: predicted labels Ŷ
 1: Initialize Φ_c, {Φ_s^v}_{v=1..V}, Θ, Ω
 2: for epoch = 1 to MaxEpoch do
 3:   for each view v in {1, ..., V} do
 4:     Obtain H_s^v, H_c^v using Eq. (2)
 5:   end for
 6:   Fuse {H_c^v}_{v=1..V} to obtain C using Eq. (3)
 7:   Calculate L_sha^v using Eq. (4)
 8:   for each view v in {1, ..., V} do
 9:     Calculate L_sep^v using Eq. (6)
10:     Calculate L_ce using Eq. (7)
11:   end for
12:   Calculate total loss L = Σ_{v=1}^{V} (L_ce + L_sep^v) + α·L_sha + α·Σ_{v=1}^{V} L_KL^v
13:   Update Φ_c, {Φ_s^v}_{v=1..V}, Θ, Ω by minimizing L
14: end for
15: Return Ŷ
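The total-loss aggregation in step 12 of Algorithm 1 can be sketched as a small helper. This is a minimal sketch only: the function name `tgnn_total_loss` and the convention of passing per-view losses as lists of precomputed scalars are assumptions, not part of the paper's released code.

```python
def tgnn_total_loss(l_ce, l_sep, l_sha, l_kl, alpha):
    """Step 12 of Algorithm 1 (sketch):
    L = sum_v (L_ce + L_sep^v) + alpha * L_sha + alpha * sum_v L_KL^v.

    l_ce, l_sep, l_kl: per-view lists of scalar losses (one entry per view).
    l_sha: scalar shared-consistency loss; alpha: trade-off weight.
    """
    assert len(l_ce) == len(l_sep) == len(l_kl)  # one loss per view
    per_view = sum(ce + sep for ce, sep in zip(l_ce, l_sep))
    return per_view + alpha * l_sha + alpha * sum(l_kl)
```

For example, with two views, `tgnn_total_loss([1.0, 2.0], [0.5, 0.5], 4.0, [1.0, 1.0], 0.1)` evaluates to 4.0 + 0.4 + 0.2 = 4.6.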
Open Source Code | Yes | "Our code can refer to appendix." ... "Code and Appendix refer to https://github.com/huangsuj/TGNN.git."
Open Datasets | Yes | "To evaluate the effectiveness of the proposed TGNN, we conduct comprehensive experiments on three types of multi-view datasets. These include three multi-relational datasets (ACM, DBLP, YELP), three multi-attribute datasets (Animals, HW, MNIST), and three multi-modal datasets (BDGP, esp-game, Flickr). A detailed description of these datasets is provided in Appendix C.1."
Dataset Splits | Yes | "For node classification tasks on multi-relational graphs, 10% of the samples are used for validation, with the training set size varying across {20%, 40%}, and the remaining data is used for testing. For multi-attribute and multi-modal datasets, we split the data into training/testing/validation sets with a ratio of 10%/10%/80%."
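The split ratios above translate directly into sample counts. A minimal sketch, assuming simple floor rounding (the helper `split_sizes` is hypothetical; the paper does not give its sampling code):

```python
import math

def split_sizes(n, train_ratio, val_ratio=0.10):
    """Return (train, val, test) counts for a dataset of n samples.

    Multi-relational graphs: train_ratio in {0.20, 0.40} with 10% validation
    and the remainder for testing. Multi-attribute/multi-modal datasets:
    train_ratio = 0.10, 10% validation, remaining 80% for testing.
    """
    n_train = math.floor(n * train_ratio)
    n_val = math.floor(n * val_ratio)
    return n_train, n_val, n - n_train - n_val
```

For instance, `split_sizes(1000, 0.20)` gives (200, 100, 700), and `split_sizes(1000, 0.10)` gives the 10%/10%/80% split (100, 100, 800).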
Hardware Specification | No | No specific hardware details (such as GPU models, CPU types, or cloud platforms) are mentioned in the paper's experimental setup.
Software Dependencies | No | The paper mentions the Adam optimizer but does not specify version numbers for any key software components or libraries (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | "The parameters of TGNN are configured as below: the training epoch is 300, learning rate is 0.001, the hidden dimension is 512, the number of layers is 2, θ and α range in {0.1, 0.5, 0.7, 1, 1.3} and {0.001, 0.005, 0.01, 0.05, 0.1, 0.5}, respectively. The Adam optimizer is adopted with a weight decay of 5e-6 for the DBLP, Flickr, and HW datasets, and 5e-4 for the remaining datasets."
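The reported hyperparameters can be collected into a single configuration, with the per-dataset weight-decay rule made explicit. This is a sketch only: the names `CONFIG` and `weight_decay_for` are hypothetical and not taken from the released code.

```python
# Hyperparameters as reported in the experiment setup.
CONFIG = {
    "epochs": 300,
    "lr": 1e-3,
    "hidden_dim": 512,
    "num_layers": 2,
    "theta_grid": [0.1, 0.5, 0.7, 1, 1.3],
    "alpha_grid": [0.001, 0.005, 0.01, 0.05, 0.1, 0.5],
}

def weight_decay_for(dataset):
    """Adam weight decay: 5e-6 for DBLP, Flickr, and HW; 5e-4 otherwise."""
    return 5e-6 if dataset in {"DBLP", "Flickr", "HW"} else 5e-4
```

With a PyTorch-style setup this would feed directly into the optimizer, e.g. `Adam(params, lr=CONFIG["lr"], weight_decay=weight_decay_for("DBLP"))`.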