Contrastive Graph Autoencoder for Shape-based Polygon Retrieval from Large Geometry Datasets

Authors: Zexian Huang, Kourosh Khoshelham, Martin Tomko

TMLR 2024

Reproducibility Variable: Result (LLM response below each entry)

Research Type: Experimental
  Evidence: "Experimentally, we demonstrate this capability based on template query shapes on real-world datasets and show its high robustness to geometric transformations in contrast to existing GAEs, indicating the strong generalizability and versatility of CGAE, including on complex real-world building footprints."

Researcher Affiliation: Academia
  Evidence: "Zexian Huang, Kourosh Khoshelham & Martin Tomko, The University of Melbourne, Parkville, Victoria, 3010, Australia. {zexianh@student., k.khoshelham@, tomkom@}unimelb.edu.au"

Pseudocode: Yes
  Evidence: "We depict the algorithmic sequence of CGAE and its relationship with the Equations noted in the main paper in Fig. 6."

Open Source Code: Yes
  Evidence: "Source code for method implementation and datasets for reproducing experiment results is available at https://github.com/zexhuang/CGAE."

Open Datasets: Yes
  Evidence: "Source code for method implementation and datasets for reproducing experiment results is available at https://github.com/zexhuang/CGAE."
  Cited datasets: OSM Planet dump, 2023 (https://planet.osm.org); City of Melbourne 2020 building footprints, May 2021 (https://data.melbourne.vic.gov.au/explore/dataset/2020-building-footprints/information/).

Dataset Splits: Yes
  Evidence: "The four Glyph datasets are combined and divided into a training/validation/test set (60 : 20 : 20)."

Hardware Specification: No
  The paper does not provide specific hardware details, such as GPU/CPU models or cloud environment specifications, used for running the experiments.

Software Dependencies: No
  The paper mentions the Adam optimizer and a cosine annealing schedule but does not provide version numbers for the software libraries or frameworks used in the implementation.

Experiment Setup: Yes
  Evidence: "We train all models for 100 epochs with the Adam optimizer (Kingma & Ba, 2015) and an initial learning rate of 0.0001. We set the training batch size b = 32 and apply the same batch size to contrastive loss in CGAE. We set the augmentation ratio r to 20% for both random node dropping and edge perturbation in graph augmentation."
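The reported setup (100 epochs, initial learning rate 0.0001, cosine annealing) and the 60:20:20 split can be sketched in a few lines. This is a minimal, library-free illustration, assuming a standard cosine annealing schedule without restarts; the function names and the fixed shuffle seed are illustrative assumptions, not details from the paper.

```python
import math
import random

def cosine_annealing_lr(epoch, total_epochs=100, lr_init=1e-4, lr_min=0.0):
    """Cosine-annealed learning rate at a given epoch (no warm restarts).

    Decays from lr_init at epoch 0 to lr_min at epoch total_epochs,
    matching the paper's 100-epoch schedule with initial lr 0.0001.
    """
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

def train_val_test_split(items, seed=42):
    """Shuffle and partition a dataset into 60:20:20 train/val/test subsets.

    The seed is a hypothetical choice; the paper does not report one.
    """
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]
```

For example, `cosine_annealing_lr(0)` returns the initial rate 0.0001, and the schedule decays monotonically toward zero by epoch 100; splitting 100 samples yields partitions of 60, 20, and 20.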