PolyhedronNet: Representation Learning for Polyhedra with Surface-attributed Graph

Authors: Dazhou Yu, Genpei Zhang, Liang Zhao

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental evaluations on four distinct datasets, encompassing both classification and retrieval tasks, substantiate PolyhedronNet's efficacy in capturing comprehensive and informative representations of 3D polyhedral objects.
Researcher Affiliation | Academia | Dazhou Yu, Genpei Zhang, Liang Zhao; Department of Computer Science, Emory University. EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology in regular paragraph text and mathematical equations, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code and data are available at github.com/dyu62/3D_polyhedron.
Open Datasets | Yes | We employ the following datasets for both classification and retrieval tasks: MNIST-C: 13,742 samples of digit polyhedra, created by stretching 2D polygon shapes from the MNIST-P dataset (Jiang et al., 2019) into 3D along the z-axis. Building: 5,000 polyhedra extending 2D polygons from the OpenStreetMap (OSM) building dataset (Yan et al., 2021) into 3D. ShapeNet-P: 2,122 polyhedra across 15 object categories, derived from the ShapeNetCore dataset (Chang et al., 2015). ModelNet-P: 1,303 polyhedra spanning 14 object categories, based on ModelNet40 (Wu et al., 2015a).
Dataset Splits | Yes | Each dataset is randomly split into 60%, 20%, and 20% for training, validation, and testing respectively.
Hardware Specification | No | The paper does not specify the hardware used for its experiments (no GPU/CPU models, processor speeds, or memory amounts); it only implies that some computational resources were used.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer and an MLP with batch normalization enabled, but provides no version numbers for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | The learning rate is set to 0.001 across all tasks and models. Training and testing batch sizes are 32 for the MNIST-C and Building datasets and 8 for the ShapeNet-P and ModelNet-P datasets. The downstream task model for classification is a four-layer MLP with batch normalization enabled. All models are trained for at most 500 epochs using an early-stopping scheme. The tuned hyperparameters are the hidden dimension in {64, 128, 256, 512, 1024} and the number of GNN layers in {1, 2, 3, 4, 8}. The best settings [hidden dimension, GNN layers] per dataset are: MNIST-C: [256, 4]; Building: [512, 4]; ShapeNet-P: [256, 2]; ModelNet-P: [128, 2].
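The 60/20/20 random split reported in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the fixed seed, and the use of the MNIST-C sample count are all assumptions.

```python
import random

def split_indices(n, ratios=(0.6, 0.2, 0.2), seed=0):
    """Randomly partition n sample indices into train/val/test by the given ratios."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # deterministic shuffle for reproducibility
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    # The remainder goes to the test set, so every sample is used exactly once.
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# e.g. MNIST-C has 13,742 samples (count taken from the Open Datasets row)
train, val, test = split_indices(13742)
```

Taking the test set as the remainder avoids losing samples to rounding when the ratios do not divide the dataset size evenly.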
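The Experiment Setup row can be condensed into a configuration sketch. The dictionary names are illustrative, and the early-stopping patience is an assumption: the paper says only "early stop scheme" without specifying a patience value.

```python
# Reported hyperparameters, keyed by dataset: (hidden dimension, number of GNN layers).
BEST_HPARAMS = {
    "MNIST-C":    (256, 4),
    "Building":   (512, 4),
    "ShapeNet-P": (256, 2),
    "ModelNet-P": (128, 2),
}
BATCH_SIZE = {"MNIST-C": 32, "Building": 32, "ShapeNet-P": 8, "ModelNet-P": 8}
LEARNING_RATE = 1e-3   # 0.001 across all tasks and models
MAX_EPOCHS = 500

class EarlyStopper:
    """Signal a stop when validation loss has not improved for `patience` epochs.
    The patience of 20 is a placeholder; the paper does not state the value used."""
    def __init__(self, patience=20):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True means: stop training
```

A training loop would call `stopper.step(val_loss)` once per epoch and break when it returns `True`, capping training at `MAX_EPOCHS` either way.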