Symmetry-Robust 3D Orientation Estimation

Authors: Christopher Scarvelis, David Benhaim, Paul Zhang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We introduce a two-stage orientation pipeline that achieves state-of-the-art performance on up-axis estimation and further demonstrate its efficacy on full-orientation estimation... We train and evaluate our model on the entire Shapenet dataset... Section 4, Experiments: We now evaluate our method's performance on orientation estimation. We first follow the evaluation procedure in Pang et al. (2022) and benchmark against their Upright-Net... We depict the results of this benchmark in Table 1. Our method improves on Upright-Net's up-axis estimation accuracy by nearly 20 percentage points...
Researcher Affiliation | Collaboration | 1 MIT CSAIL, Cambridge, MA; 2 Backflip AI, San Francisco, CA. Correspondence to: Christopher Scarvelis <EMAIL>.
Pseudocode | No | The paper describes methods such as the two-stage orientation pipeline, quotient orienter, and flipper, but it does so using descriptive text and mathematical formulations. There are no clearly labeled algorithm blocks or pseudocode sections in the paper.
Open Source Code | Yes | Our contributions include the following: (5) we release our code and model weights to share our work with the ML community.
Open Datasets | Yes | We train and evaluate all models on Shapenet (Chang et al., 2015), as this is the largest and most diverse dataset we are aware of containing canonically oriented shapes... We further demonstrate its generalization capabilities on Objaverse, a large dataset of real-world 3D models of varying quality.
Dataset Splits | Yes | We construct a random 90-10 train-test split of Shapenet, draw 10k point samples from the surface of each mesh, and train our quotient orienter and flipper on all classes in the training split.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. It only mentions 'compute' in general terms in the acknowledgements section.
Software Dependencies | No | We parametrize our quotient orienter by a DGCNN and use the authors' PyTorch implementation (Wang et al., 2019)... we use the roma package (Brégier, 2021) to efficiently compute this projection. The paper mentions PyTorch and the roma package but does not provide version numbers for either.
Experiment Setup | Yes | We train our quotient orienter for 1919 epochs and our flipper for 3719 epochs, sampling 2k points per point cloud at each iteration and fixing a learning rate of 10^-4. We parametrize our quotient orienter by a DGCNN... with 1024-dimensional embeddings, k = 20 neighbors for the EdgeConv layers, and a dropout probability of 0.5... pass batches of 48 point clouds per iteration.
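The two-stage pipeline quoted in the Research Type row (a quotient orienter followed by a flipper) can be sketched as follows. This is a hedged illustration, not the authors' code: the function names, the identity-returning stubs, and the particular set of candidate flips are all placeholder assumptions standing in for the paper's learned networks.

```python
import numpy as np

def quotient_orienter(points):
    """Stage 1 (stub): predict a rotation that canonicalizes the shape
    up to its symmetries. A real model would be a learned network; the
    identity here is a placeholder."""
    return np.eye(3)

def flipper(points):
    """Stage 2 (stub): resolve the residual discrete ambiguity by
    choosing among a finite set of candidate flips (each a rotation,
    det = +1). A real flipper would score candidates with a network;
    this placeholder returns the identity flip."""
    candidates = [np.diag(d) for d in
                  [(1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1)]]
    return candidates[0]

def orient(points):
    """Compose the two stages: rotate the cloud, then apply the flip."""
    R = quotient_orienter(points)
    rotated = points @ R.T
    F = flipper(rotated)
    return rotated @ F.T
```

With identity stubs, `orient` leaves the input cloud unchanged; swapping in trained networks for the two stubs recovers the two-stage structure the paper describes.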
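The Dataset Splits row reports a random 90-10 train-test split with 10k surface samples per mesh. A minimal sketch of that split, assuming a hypothetical mesh count and leaving surface sampling as a stub:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the Shapenet mesh list; the report states
# a random 90-10 train-test split.
num_meshes = 100
indices = rng.permutation(num_meshes)
split = int(0.9 * num_meshes)
train_idx, test_idx = indices[:split], indices[split:]

def sample_surface(mesh, n=10_000):
    """Placeholder for drawing 10k point samples from a mesh surface;
    a real implementation would sample points proportionally to
    triangle area."""
    ...
```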
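The Software Dependencies row notes that the roma package is used to project onto the rotation group. As a sketch of what such a projection computes, here is the standard special-orthogonal Procrustes projection in NumPy; whether this matches the exact roma routine the authors call is an assumption.

```python
import numpy as np

def project_to_so3(M):
    """Project a 3x3 matrix onto SO(3) via the special orthogonal
    Procrustes solution: R = U diag(1, 1, det(U V^T)) V^T, where
    M = U S V^T is the SVD. The det factor forces det(R) = +1."""
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

The result is the rotation matrix closest to `M` in Frobenius norm.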
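The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration for reference. The values come from the quoted text; the dict structure and key names are illustrative, not the authors' actual config format.

```python
# Reported hyperparameters from the experiment setup (structure and
# key names are illustrative assumptions, not the authors' config).
config = {
    "quotient_orienter_epochs": 1919,
    "flipper_epochs": 3719,
    "points_per_cloud": 2000,   # 2k points sampled per iteration
    "learning_rate": 1e-4,
    "batch_size": 48,
    "dgcnn": {
        "embedding_dim": 1024,
        "knn_neighbors": 20,    # k for the EdgeConv layers
        "dropout": 0.5,
    },
}
```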