Pygmtools: A Python Graph Matching Toolkit

Authors: Runzhong Wang, Ziao Guo, Wenzheng Pan, Jiale Ma, Yikai Zhang, Nan Yang, Qi Liu, Longxuan Wei, Hanxue Zhang, Chang Liu, Zetian Jiang, Xiaokang Yang, Junchi Yan

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Table 1 also provides insights into the relative running times of pygmtools (PyTorch backend) compared to ZAC GM (Matlab), using RRWM on graphs containing 50 nodes. Additionally, it is compared to multiway (Matlab) with GAMGM on 10 graphs comprising 250 nodes. [...] Several examples and notebooks are offered with pygmtools to present typical applications of GM. GM solvers are illustrated on matching synthetic graphs and on representative real-world applications such as matching images (Zanfir and Sminchisescu, 2018; Wang et al., 2023a,b) and fusing deep neural networks (Liu et al., 2022).
Researcher Affiliation | Academia | Runzhong Wang, Ziao Guo, Wenzheng Pan, Jiale Ma, Yikai Zhang, Nan Yang, Qi Liu, Longxuan Wei, Hanxue Zhang, Chang Liu, Zetian Jiang, Xiaokang Yang, Junchi Yan; CSE Department and MoE Key Lab of AI, Shanghai Jiao Tong University, Shanghai, 200240, China
Pseudocode | No | The paper provides Python code snippets for using the pygmtools library (for example, on page 4, showing how to build an affinity matrix and solve a GM problem), but these are not presented as structured pseudocode or algorithm blocks describing the underlying methods themselves.
Open Source Code | Yes | pygmtools is open-sourced under the Mulan PSL v2 license. [...] Under the Mulan PSL v2 license, our toolkit relies solely on open-source libraries.
Open Datasets | No | The paper mentions illustrating GM solvers on 'matching synthetic graphs' and 'representative real-world applications such as matching images' (referencing other works), but does not provide concrete access information (link, DOI, repository, or formal citation for the dataset used for *their* evaluation) for any specific publicly available or open dataset.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for any experiments conducted within this paper.
Hardware Specification | No | The paper mentions 'GPU support' in Table 1 and that 'users can configure other backends with a better support of GPU and deep learning', but it does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments or benchmarks.
Software Dependencies | No | The paper mentions several numerical backends such as NumPy, PyTorch, Jittor, Paddle, TensorFlow, and MindSpore. However, it does not specify their version numbers or any other software dependencies with version information, which is required for reproducibility.
Experiment Setup | No | The paper provides an example of how to use the toolkit, including a parameter 'sigma=1.' for an affinity function. However, it does not detail specific experimental setup parameters such as hyperparameters, optimizer settings, training configurations, or system-level settings for evaluating the toolkit's performance or any particular model's training in the main text.
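Several rows above refer to the paper's usage snippet, which builds an affinity matrix (with a Gaussian affinity function, sigma=1.) and then calls a GM solver. As a rough illustration of that affinity-then-solver pipeline, here is a self-contained NumPy sketch of Lawler-QAP affinity construction followed by spectral matching; it is illustrative only and deliberately does not use the pygmtools API.

```python
import numpy as np

# Illustrative sketch (NOT the pygmtools API): build a Lawler-QAP
# affinity matrix for two isomorphic weighted graphs, then solve the
# matching with spectral relaxation (power iteration) and a greedy
# discretization step.

rng = np.random.default_rng(0)
n = 4
A1 = rng.random((n, n))
A1 = (A1 + A1.T) / 2                  # symmetric edge weights
np.fill_diagonal(A1, 0)
perm = rng.permutation(n)
A2 = A1[np.ix_(perm, perm)]           # graph 2 = permuted copy of graph 1

# Gaussian edge affinity with sigma=1., echoing the paper's example
sigma = 1.0
diff = A1[:, :, None, None] - A2[None, None, :, :]   # diff[i, j, a, b]
aff = np.exp(-diff**2 / (2 * sigma**2))
# K[(i, a), (j, b)] = affinity of matching edge (i, j) to edge (a, b)
K = aff.transpose(0, 2, 1, 3).reshape(n * n, n * n)

# Spectral matching: leading eigenvector of K via power iteration
v = np.ones(n * n)
for _ in range(200):
    v = K @ v
    v /= np.linalg.norm(v)
X = v.reshape(n, n)                   # soft node-to-node scores

# Greedy discretization (a simple stand-in for Hungarian matching)
match = -np.ones(n, dtype=int)
scores = X.copy()
for _ in range(n):
    i, a = np.unravel_index(np.argmax(scores), scores.shape)
    match[i] = a
    scores[i, :] = -np.inf
    scores[:, a] = -np.inf

gt = np.argsort(perm)                 # ground truth: G1 node i <-> G2 node gt[i]
print("recovered:", match, "ground truth:", gt)
```

In pygmtools itself these steps are handled by library calls (affinity construction, a solver such as RRWM, and Hungarian discretization); this sketch only shows the shape of the computation the review's rows refer to.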