Variational Autoencoding of Dental Point Clouds

Authors: Johan Ziruo Ye, Thomas Ørkild, Peter Lempel Søndergard, Søren Hauberg

TMLR 2024

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
"We evaluate our approach on a large dataset of intraoral scans, demonstrating its ability to generate diverse and high-quality dental shapes and to interpolate smoothly between different tooth morphologies. Quantitative and qualitative results show that our model outperforms baseline methods in terms of reconstruction accuracy, generation diversity, and interpolation quality."
Researcher Affiliation: Academia
"1. Technical University of Munich, Germany; 2. University of Fribourg, Switzerland; 3. University Hospital Regensburg, Germany"
Pseudocode: No
The paper describes the methodology using textual explanations and architectural diagrams (Figures 1 and 2) rather than structured pseudocode or algorithm blocks.
Open Source Code: No
The paper neither contains an explicit statement about making the source code available nor provides a link to a code repository.
Open Datasets: No
"Our dataset consists of 5000 3D intraoral scans of individual teeth, obtained from a dental clinic. Each scan is represented as a point cloud with approximately 10,000 points. The dataset is diverse, covering a wide range of tooth types and morphological variations." The paper describes the dataset but does not provide access information.
Dataset Splits: Yes
"We split the dataset into 4000 training, 500 validation, and 500 test samples."
Hardware Specification: Yes
"Our models are implemented in PyTorch and trained on a single NVIDIA RTX 3090 GPU."
Software Dependencies: No
"Our models are implemented in PyTorch and trained on a single NVIDIA RTX 3090 GPU." Only PyTorch is mentioned, and no version number is given.
Experiment Setup: Yes
"The VAE models were trained for 500 epochs using the Adam optimizer with a learning rate of 0.001, β1 = 0.9, and β2 = 0.999. The batch size was set to 32. The latent space dimension was set to 128. For the Chamfer distance, we used a weighting factor of 1000 for point-to-point correspondence."
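The quoted 4000/500/500 split can be sketched with PyTorch's `random_split`. This is a minimal sketch, not the authors' code: the dataset object, the tiny per-cloud point count, and the fixed seed are all assumptions made for illustration.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Hypothetical stand-in for the 5000-scan dataset. The real scans have
# approximately 10,000 points each; 8 points per cloud keeps this light.
dataset = TensorDataset(torch.randn(5000, 8, 3))

# 4000 / 500 / 500 split as quoted; the fixed seed is an assumption
# made so the split is reproducible across runs.
generator = torch.Generator().manual_seed(0)
train_set, val_set, test_set = random_split(
    dataset, [4000, 500, 500], generator=generator
)

print(len(train_set), len(val_set), len(test_set))  # 4000 500 500
```

Passing explicit lengths (rather than fractions) guarantees the exact sample counts reported in the paper.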
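The quoted training configuration maps onto a short PyTorch sketch. Only the hyperparameters come from the quote (Adam with learning rate 0.001, β1 = 0.9, β2 = 0.999, batch size 32, and the factor-1000 Chamfer weighting); the placeholder model and the squared-distance Chamfer formulation are assumptions.

```python
import torch
import torch.nn as nn

def chamfer_distance(x: torch.Tensor, y: torch.Tensor,
                     weight: float = 1000.0) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds x (B, N, 3) and y (B, M, 3).

    The weighting factor of 1000 follows the quoted setup; the squared
    distances and mean reduction are assumptions.
    """
    d = torch.cdist(x, y) ** 2  # pairwise squared distances, shape (B, N, M)
    # Nearest-neighbour terms in both directions, averaged per cloud.
    loss = d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)
    return weight * loss.mean()

# Hypothetical stand-in for the VAE; the quoted latent dimension of 128
# would live inside the real encoder/decoder, omitted here.
model = nn.Linear(3, 3)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))

# One illustrative training step with the quoted batch size of 32.
points = torch.randn(32, 64, 3)
recon = model(points)
loss = chamfer_distance(recon, points)
loss.backward()
optimizer.step()
```

Note the Chamfer term alone omits the KL divergence that a full VAE objective would add; the quote only specifies the reconstruction weighting.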