Position: You Can’t Manufacture a NeRF
Authors: Ma Kimmel, Mueed Ur Rehman, Yonatan Bisk, Gary K. Fedder
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate SOTA models in mesh generation as well as CAD reconstruction against the F360 segmentation dataset, and evaluate CAD reconstruction on the Thang3D dataset. We demonstrate that mesh alone, even at extremely high grid resolutions with noiseless inputs, is not precise or accurate enough for standard manufacturing techniques. We also demonstrate that SOTA CAD reconstruction similarly fails to reconstruct a mathematically valid object over 80% of the time on the complex datasets. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh PA, USA 2Electrical and Computer Engineering, Cornell University 3Language Technologies Institute, Carnegie Mellon University, Pittsburgh PA, USA. Correspondence to: Ani Kimmel <EMAIL>. |
| Pseudocode | No | The paper describes methodologies but does not contain any structured pseudocode or algorithm blocks. Figure 5 illustrates a 'Common model pipeline' but it is a diagram, not pseudocode. |
| Open Source Code | No | The paper does not explicitly state that source code is provided or made available, nor does it include any links to code repositories. |
| Open Datasets | Yes | Our evaluation dataset is built upon the Fusion360 (F360) Gallery Segmentation Dataset of roughly 35,000 parts, as it has models incorporating advanced construction features such as fillets and chamfers and is also designed by humans. Even so, some CAD operations were suppressed for simplification (Lambourne et al., 2021). The dataset includes corresponding STEP and STL representations of each part, though more precise STL meshes can be generated. We sample from these mesh approximations to generate point clouds along with associated surface normals, then label each point as a segment according to its corresponding BREP face in the dataset. We also evaluate on 200 models downloaded from the Thang3D online CAD file repository, which have no restrictions or simplifications. |
| Dataset Splits | No | The paper describes the datasets used (F360 segmentation dataset and Thang3D dataset) but does not specify how these datasets were split into training, validation, or test sets. It mentions sampling 10,000-point point clouds, but this is a sampling strategy, not a dataset split description. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions several models and algorithms, such as Instant NGP, Point2CAD, and an improved marching cubes algorithm, but does not provide specific version numbers for any software libraries, frameworks, or tools used in their implementation or experiments. |
| Experiment Setup | Yes | Using the F360 segmentation dataset, we allowed Instant NGP (Müller et al., 2022) to train until either the total loss is less than 0.0025 or 250,000 steps are reached, with the results shown in Table 1. The output resolution for every mesh is 256 x 256 x 256, which is greater than the resolution or maximum face number of any of the prior discrete generation models listed above. We remesh the output neural radiance field using an improved marching cubes algorithm, and all benchmarks are evaluated against these meshes. |
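The point-cloud generation step quoted under "Open Datasets" (sample from the mesh approximations, record surface normals, and label each point by its source face) can be sketched as area-weighted sampling over mesh triangles. This is a minimal illustration, not the authors' code: the function name `sample_surface` and all variable names are our own, and the returned face index stands in for the BREP-face segment label described in the paper.

```python
import numpy as np

def sample_surface(vertices, faces, n_points, rng=None):
    """Uniformly sample points, with face normals, from a triangle mesh.

    Illustrative sketch of the paper's point-cloud sampling step.
    Returns the sampled points, their face normals, and the index of the
    face each point came from (a stand-in for the segment label).
    """
    rng = np.random.default_rng(rng)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Face normals and areas from the cross product of the edge vectors.
    cross = np.cross(v1 - v0, v2 - v0)
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    # Pick faces proportionally to area so sampling is uniform over the surface.
    face_idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Standard uniform barycentric sampling inside each chosen triangle.
    r1, r2 = rng.random(n_points), rng.random(n_points)
    sqrt_r1 = np.sqrt(r1)
    u = 1.0 - sqrt_r1
    v = sqrt_r1 * (1.0 - r2)
    w = 1.0 - u - v
    points = (u[:, None] * v0[face_idx]
              + v[:, None] * v1[face_idx]
              + w[:, None] * v2[face_idx])
    return points, normals[face_idx], face_idx
```

For example, a 10,000-point cloud like the one described in "Dataset Splits" would be `sample_surface(vertices, faces, 10_000)` for each part.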
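The stopping rule reported under "Experiment Setup" (train until total loss falls below 0.0025 or 250,000 steps are reached) amounts to a bounded loop with an early exit. A minimal sketch, where `step_fn` is a hypothetical stand-in for one Instant NGP optimizer step returning the current total loss:

```python
def train_until_converged(step_fn, loss_threshold=0.0025, max_steps=250_000):
    """Run training steps until the loss drops below the threshold or the
    step budget is exhausted, whichever comes first.

    `step_fn(step)` is assumed to perform one optimizer step and return
    the current total loss; it is illustrative, not the authors' API.
    Returns the final step count and final loss.
    """
    loss = float("inf")
    for step in range(1, max_steps + 1):
        loss = step_fn(step)
        if loss < loss_threshold:
            return step, loss  # early exit on convergence
    return max_steps, loss     # budget exhausted
```

With a strict `<` comparison, a run that plateaus exactly at 0.0025 would consume the full 250,000-step budget; the paper does not specify the comparison direction, so this is one plausible reading.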