Beyond Canonicalization: How Tensorial Messages Improve Equivariant Message Passing
Authors: Peter Lippmann, Gerrit Gerhartz, Roman Remme, Fred A. Hamprecht
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the superiority of tensorial messages and achieve state-of-the-art results on normal vector regression and competitive results on other standard 3D point cloud tasks. |
| Researcher Affiliation | Academia | Peter Lippmann, Gerrit Gerhartz, Roman Remme & Fred A. Hamprecht, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, 69120 Heidelberg, Germany |
| Pseudocode | No | The paper describes methods using mathematical equations and structured steps, but there are no explicitly labeled pseudocode or algorithm blocks. For example, the message passing steps are given as equations (Eq. 11, 12, 26, 27, 29, 31, 32, 33) within the text. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing code or links to a code repository, nor does it mention code in supplementary materials. |
| Open Datasets | Yes | We have trained different variants of our PointNet++ adaptation on normal vector regression and classification on the ModelNet40 dataset (Wu et al., 2015) and on segmentation on the ShapeNet dataset (Yi et al., 2016). ModelNet40 consists of 12,311 3D shapes of 40 different categories. We use the resampled version of the dataset for which normal vectors at all points are available and use the default train/test split. The ShapeNet dataset consists of around 17,000 3D point clouds (including normal vectors) from 16 shape categories, annotated with 50 semantic classes for segmentation. [...] The normal vector regression and the classification experiments are conducted on the ModelNet40 dataset (Wu et al., 2015). In particular, we use the resampled version available at https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip, which includes normal vectors for each point in the point cloud. The segmentation experiments are conducted on the ShapeNet dataset (Yi et al., 2016). |
| Dataset Splits | Yes | ModelNet40 consists of 12,311 3D shapes of 40 different categories. We use the resampled version of the dataset for which normal vectors at all points are available and use the default train/test split. The ShapeNet dataset consists of around 17,000 3D point clouds (including normal vectors) from 16 shape categories, annotated with 50 semantic classes for segmentation. [...] The normal vector regression and the classification experiments are conducted on the ModelNet40 dataset (Wu et al., 2015). In particular, we use the resampled version available at https://shapenet.cs.stanford.edu/media/modelnet40_normal_resampled.zip, which includes normal vectors for each point in the point cloud. We use the first 1024 points based on the ordering provided in this version of the dataset and normalize the point clouds to fit in the unit sphere. The ordering is based on furthest point sampling to evenly cover the surface of the 3D shapes. |
| Hardware Specification | Yes | The training of the equivariant learned frames + refining frames + tensor messages model for the normal vector regression task took 46h on a single NVIDIA A100 GPU (CPU: 2 x 32-Core Epyc 7452). The training of the data-augmented model took 19h on the same machine. The best-performing equivariant classification model (learned frames + refining frames + tensor messages) was trained for 20h on a single Quadro RTX 6000 (CPU: 2 x 32-Core Epyc 7452) and the data-augmented version for 15h. The equivariant segmentation model (learned frames + refining frames + tensor messages) was trained for 39h on a single NVIDIA A100 GPU (CPU: 2 x 32-Core Epyc 7452) and the data augmented version for 17h. |
| Software Dependencies | No | The paper does not explicitly state specific version numbers for software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | The hyperparameters chosen for our two main experiments are listed in Tab. 7. Table 7: Hyperparameter choices. The main hyperparameter choices for our models in the classification and normal vector regression task. Label smoothing only applies to the classification and segmentation models. For the classification task on ScanObjectNN we only trained for 500 epochs. Values given as (normal vector regression / classification / segmentation): optimizer AdamW / AdamW / AdamW; weight decay 5e-4 / 0.05 / 1e-3; learning rate 2.5e-3 / 1e-3 / 0.05; scheduler Cosine-LR for all; epochs 800 / 800 or 500 / 800; warm-up epochs 10 for all; gradient clip 0.5 for all; label smoothing N.A. / 0.3 / 0.3; loss L1-loss / Cross-Entropy / Cross-Entropy. |
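The paper reports a "Cosine-LR" scheduler with 10 warm-up epochs but does not spell out the exact schedule formula. A minimal sketch of the common interpretation (linear warm-up followed by cosine decay to zero), using the classification settings from Table 7, might look like this; the function name and the decay-to-zero assumption are hypothetical, not taken from the paper:

```python
import math

def lr_at_epoch(epoch, base_lr=1e-3, warmup_epochs=10, total_epochs=800):
    """Hypothetical reconstruction of the reported schedule:
    linear warm-up for `warmup_epochs`, then cosine decay to zero
    over the remaining epochs (assumption; the paper gives no formula)."""
    if epoch < warmup_epochs:
        # Linear ramp: reaches base_lr at the end of warm-up.
        return base_lr * (epoch + 1) / warmup_epochs
    # Cosine decay from base_lr down to 0 over the remaining epochs.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

For example, the learning rate peaks at `base_lr` once warm-up completes and then decreases monotonically; frameworks such as PyTorch provide equivalent built-in schedulers, so this sketch only illustrates the shape of the schedule the paper names.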