GraSP: Simple Yet Effective Graph Similarity Predictions

Authors: Haoran Zheng, Jieming Shi, Renchi Yang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental The paper is empirical: GRASP is evaluated against 10 competitors over 4 real-world datasets on GED and MCS prediction tasks under various settings, and the results show that GRASP consistently achieves superior GED/MCS estimation performance over all baselines while retaining high efficiency. Tables 1 and 2 report the overall effectiveness of all methods on all datasets for GED and MCS predictions, respectively, and an ablation study is also included.
Researcher Affiliation Academia Haoran Zheng1*, Jieming Shi2, and Renchi Yang1. 1Hong Kong Baptist University, Hong Kong, China; 2The Hong Kong Polytechnic University, Hong Kong, China. EMAIL, EMAIL, EMAIL
Pseudocode No The paper describes the architecture of GRASP using a diagram (Figure 2) and mathematical equations (Equations 1-9) to explain its components and processes. However, it does not include a clearly labeled pseudocode block or algorithm.
Open Source Code Yes Code: https://github.com/HaoranZ99/GraSP
Open Datasets Yes We conduct experiments on four real-world datasets, including AIDS700nef, LINUX, IMDBMulti (Bai et al. 2019) and PTC (Bai et al. 2020).
Dataset Splits Yes We split training, validation, and testing data with a ratio of 6:2:2 for all datasets and all methods by following the setting in (Bai et al. 2019) and (Bai et al. 2020).
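The 6:2:2 train/validation/test split described above can be sketched as a small helper; this is an illustrative reconstruction, not the authors' code, and the function name and seed are assumptions:

```python
import random

def split_622(items, seed=0):
    """Shuffle and split a dataset into train/val/test at a 6:2:2 ratio.

    Sketch of the split described in the report (ratio is from the
    paper; the shuffle seed is a hypothetical choice for determinism).
    """
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * 0.6)
    n_val = int(n * 0.2)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# Example: 100 graph pairs -> 60 train, 20 validation, 20 test.
train, val, test = split_622(range(100))
```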
Hardware Specification Yes All experiments are conducted on a Linux machine with an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz and an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies No The paper mentions implementing the methods and using specific datasets but does not explicitly list any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup Yes In our method, we use a search range of {w/o, 8, 16, 24, 32} for the step size k of the RWPE, {4, 6, 8, 10, 12} for the number of layers ℓ of the GNN backbone, and {16, 32, 64, 128, 256} for the dimensionality d of the node hidden representations and also the final graph embedding. Our full hyperparameter settings on the four datasets can be found in the extended version.
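The quoted search ranges can be enumerated as a simple grid; the dictionary below is a minimal sketch using the ranges stated in the report (key names are assumptions, and "w/o" for the RWPE step size is encoded as None):

```python
from itertools import product

# Search ranges quoted from the report; None encodes "w/o" (RWPE disabled).
SEARCH_SPACE = {
    "rwpe_steps": [None, 8, 16, 24, 32],    # step size k of the RWPE
    "gnn_layers": [4, 6, 8, 10, 12],        # number of layers l
    "hidden_dim": [16, 32, 64, 128, 256],   # dimensionality d
}

def iter_configs(space):
    """Yield every hyperparameter combination in the search space."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

# 5 * 5 * 5 = 125 candidate configurations to evaluate.
configs = list(iter_configs(SEARCH_SPACE))
```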