Charting the Design Space of Neural Graph Representations for Subgraph Matching

Authors: Vaibhav Raj, Indradyumna Roy, Ashwin Ramachandran, Soumen Chakrabarti, Abir De

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments reveal that judicious and hitherto-unexplored combinations of choices in this space lead to large performance benefits. Beyond better performance, our study uncovers valuable insights and establishes general design principles for neural graph representation and interaction, which may be of wider interest. Our code and datasets are publicly available at https://github.com/structlearning/neural-subm-design-space. In this section, we systematically evaluate different configurations across the five salient design axes on ten real datasets with diverse graph sizes.
Researcher Affiliation | Academia | Vaibhav Raj, Indradyumna Roy, Ashwin Ramachandran, Soumen Chakrabarti, Abir De. Department of Computer Science and Engineering, IIT Bombay. Email: ashwinramg@ucsd.edu
Pseudocode | No | The paper describes mathematical formulations and update rules (e.g., Eqs. 1, 2, 10-12) for its models and frameworks, but it does not present a distinct, structured pseudocode block or an explicitly labeled algorithm section.
Open Source Code | Yes | Our code and datasets are publicly available at https://github.com/structlearning/neural-subm-design-space.
Open Datasets | Yes | Our code and datasets are publicly available at https://github.com/structlearning/neural-subm-design-space. Datasets: We select ten real-world datasets from the TUDatasets repository (Morris et al., 2020), viz., AIDS, Mutag, PTC-FM (FM), NCI, MOLT, PTC-FR (FR), PTC-MM (MM), PTC-MR (MR), MCF and MSRC. Inspired by this approach, we extended our experiments to include three large-scale graphs drawn from the SNAP repository: com-Amazon, email-Enron, and roadnet-CA.
Dataset Splits | Yes | We split query graphs Q = {Gq} into training, validation and test folds in the ratio 60:15:25.
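As a rough illustration of the reported 60:15:25 split (this is not the authors' code; the function name and shuffling scheme are my own assumptions), a query set can be partitioned as follows:

```python
import random

def split_queries(queries, ratios=(0.60, 0.15, 0.25), seed=0):
    """Shuffle and split a list of query graphs into train/val/test folds
    in the given proportions (60:15:25 as stated in the paper)."""
    rng = random.Random(seed)
    idx = list(range(len(queries)))
    rng.shuffle(idx)
    n_train = int(len(queries) * ratios[0])
    n_val = int(len(queries) * ratios[1])
    train = [queries[i] for i in idx[:n_train]]
    val = [queries[i] for i in idx[n_train:n_train + n_val]]
    test = [queries[i] for i in idx[n_train + n_val:]]
    return train, val, test

# For 100 queries this yields folds of size 60, 15, and 25.
train, val, test = split_queries(list(range(100)))
```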
Hardware Specification | Yes | All models were implemented with PyTorch 2.1.2 in Python 3.10.13. Experiments were run on Nvidia RTX A6000 (48 GB) GPUs.
Software Dependencies | Yes | All models were implemented with PyTorch 2.1.2 in Python 3.10.13.
Experiment Setup | Yes | The Adam optimizer is used to perform gradient descent on the ranking loss with a learning rate of 1e-3 and a weight decay parameter of 5e-4. We use a batch size of 128, a margin of 0.5 for the ranking loss, and cap the number of training epochs to 1000. To prevent overfitting on the training dataset, we adopt early stopping with respect to the MAP score on the validation split, with a patience of 50 epochs and a tolerance of 1e-4.
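A minimal plain-Python sketch of the early-stopping rule quoted above (class name and interface are my own; the paper only states patience = 50 epochs and tolerance = 1e-4 on validation MAP):

```python
class EarlyStopper:
    """Stop training when validation MAP fails to improve by more than
    `tol` over the best score for `patience` consecutive epochs."""

    def __init__(self, patience=50, tol=1e-4):
        self.patience = patience
        self.tol = tol
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, val_map):
        """Record one epoch's validation MAP; return True to stop."""
        if val_map > self.best + self.tol:
            self.best = val_map
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a training loop, `stopper.step(val_map)` would be checked once per epoch, alongside the hard cap of 1000 epochs; the optimizer itself would be `torch.optim.Adam(params, lr=1e-3, weight_decay=5e-4)` per the reported settings.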