QDC: Quantum Diffusion Convolution Kernels on Graphs
Authors: Thomas Markovich
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through these studies, as well as experiments on a range of datasets, we observe that QDC improves predictive performance on the widely used benchmark datasets when compared to similar methods. |
| Researcher Affiliation | Industry | Thomas Markovich (EMAIL), Cash App, Cambridge, Massachusetts, USA |
| Pseudocode | No | The paper describes methods and processes using mathematical equations and descriptive text, but it does not contain a clearly labeled pseudocode block or algorithm block. |
| Open Source Code | No | The paper mentions using open-source implementations for baselines (H2GCN, GDC, SDRF), but does not explicitly state that the authors' own code for QDC or Multi Scale QDC is made publicly available. |
| Open Datasets | Yes | We evaluated our method on 9 datasets: Cornell, Texas, and Wisconsin from the WebKB dataset; Chameleon and Squirrel from the Wiki dataset; Actor from the film dataset; and citation graphs Cora, Citeseer, and Pubmed. Where applicable, we use the same data splits as Pei et al. (2020). |
| Dataset Splits | Yes | Where applicable, we use the same data splits as Pei et al. (2020). Results are then averaged over all splits, and the average and standard deviation are reported. |
| Hardware Specification | Yes | All experiments were run using PyTorch Geometric 2.3.1 and PyTorch 1.13, and all computations were run on an Nvidia DGX A100 machine with 128 AMD Rome 7742 cores and 8 Nvidia A100 GPUs. |
| Software Dependencies | Yes | All experiments were run using PyTorch Geometric 2.3.1 and PyTorch 1.13, and all computations were run on an Nvidia DGX A100 machine with 128 AMD Rome 7742 cores and 8 Nvidia A100 GPUs. |
| Experiment Setup | Yes | We performed 250 steps of hyper-parameter optimization for each method, including baselines, and the hyper-parameter search was performed using Optuna, a popular hyper-parameter optimization framework. All tuning was performed on the validation set, and we report the test results associated with the hyper-parameter settings that maximize the validation accuracy. The parameters, and the distributions from which they were drawn, are reported in Appendix A.4. All training runs were run with a maximum of 1000 steps for each split, with early stopping turned on after 50 steps. |
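The tuning protocol described in the Experiment Setup row (search over hyper-parameter settings, select by validation accuracy, report the test accuracy of the selected setting) can be sketched as below. This is a minimal random-search stand-in, not the paper's actual code: the `evaluate` function and the search space are hypothetical placeholders (the paper used Optuna over the distributions in its Appendix A.4, and a real run would train QDC on a graph dataset here).

```python
import random

random.seed(0)

def evaluate(lr, hidden_dim):
    # Hypothetical stand-in for one training run that returns
    # (validation accuracy, test accuracy); a real run would train
    # the model on a train split and score it on val/test splits.
    val = 1.0 - abs(lr - 0.01) * 10 - abs(hidden_dim - 64) / 1000
    test = val - 0.02
    return val, test

best_val, best_test, best_params = -1.0, -1.0, None
for _ in range(250):  # 250 trials, mirroring the paper's search budget
    lr = 10 ** random.uniform(-4, -1)         # log-uniform learning rate
    hidden_dim = random.choice([16, 32, 64, 128])
    val, test = evaluate(lr, hidden_dim)
    if val > best_val:                        # select on validation only
        best_val, best_test = val, test
        best_params = {"lr": lr, "hidden_dim": hidden_dim}

# Report the test accuracy of the configuration that maximized
# validation accuracy, never the best test accuracy seen.
print(best_params, round(best_test, 3))
```

The key design point the quote emphasizes is that model selection touches only the validation split; the test score is read off once, for the selected configuration, which avoids test-set overfitting during the 250-trial search.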