Shape as Line Segments: Accurate and Flexible Implicit Surface Representation

Authors: Siyu Ren, Junhui Hou

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments, showcasing the significant advantages of our methods over state-of-the-art methods. The source code is available at https://github.com/rsy6318/SALS. [...] 4 EXPERIMENTS [...] 4.1 EVALUATION ON NEURAL IMPLICIT REPRESENTATION [...] 4.2 EVALUATION ON SURFACE RECONSTRUCTION FROM 3D POINT CLOUDS [...] 4.3 ABLATION STUDY
Researcher Affiliation | Academia | Siyu Ren, Department of Computer Science, City University of Hong Kong, Hong Kong SAR, China; Junhui Hou, Department of Computer Science, City University of Hong Kong, Hong Kong SAR, China
Pseudocode | No | The paper describes its methods with prose and equations but does not include any clearly labeled pseudocode or algorithm blocks. For example, Section 3.2, 'SURFACE EXTRACTION FROM LSFS VIA EDGE-BASED DUAL CONTOURING', describes the procedure using prose and mathematical formulas rather than structured pseudocode.
Open Source Code | Yes | The source code is available at https://github.com/rsy6318/SALS.
Open Datasets | Yes | Datasets. We randomly selected 50 shapes from the ABC dataset (Koch et al., 2019) to conduct experiments. [...] Additionally, we utilized shapes from other commonly used datasets, including open-boundary clothes from the Deep Fashion3D dataset (Zhu et al., 2020), and complex shapes from the Famous dataset (Erler et al., 2020). [...] We employed a significantly smaller training set compared to previous methods, comprising 100 shapes selected from the Thingi10K dataset (Zhou & Jacobson, 2016) to train our network. For testing, we utilized the ABC and non-manifold ABC datasets, as well as more diverse shapes from the Deep Fashion3D (Zhu et al., 2020), Synthetic Rooms (Peng et al., 2020) and Waymo (Sun et al., 2020) datasets.
Dataset Splits | Yes | Datasets. We randomly selected 50 shapes from the ABC dataset (Koch et al., 2019) to conduct experiments. [...] We employed a significantly smaller training set compared to previous methods, comprising 100 shapes selected from the Thingi10K dataset (Zhou & Jacobson, 2016) to train our network. For testing, we utilized the ABC and non-manifold ABC datasets, as well as more diverse shapes from the Deep Fashion3D (Zhu et al., 2020), Synthetic Rooms (Peng et al., 2020) and Waymo (Sun et al., 2020) datasets.
Hardware Specification | No | The paper reports 'GPU Mem.' in Table 3 but never specifies the GPU model or any other hardware details (CPU model, cloud instance type) used for the experiments. For example, the Table 3 entry '64 1.099s 1.55GB' (grid resolution, runtime, GPU memory) does not identify the device.
Software Dependencies | No | The paper names the AdamW optimizer but provides no version numbers for key software libraries or platforms (e.g., Python, PyTorch, CUDA) that would be needed for replication.
Experiment Setup | Yes | Implementation Details. We employed an 8-layer MLP, with each layer comprising 512 neurons. All layers, except the final one, use the Softplus activation function (β = 100, as recommended in (Atzmon & Lipman, 2020)). The final layer consists of 2 neurons, followed by a Sigmoid activation function. We sampled 10 million line segments within the space to optimize the MLP. The model was trained for 100,000 epochs with a batch size of 10,000, using the AdamW optimizer (Loshchilov, 2017) with an initial learning rate of 0.001. The learning rate was progressively adjusted using cosine annealing (Loshchilov & Hutter, 2016), with a minimum learning rate of 10^-5. When extracting the surface, the resolution of the grids used in E-DC was set to 128, matching the baseline methods.
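The setup quoted above is concrete enough to sketch. Below is a minimal NumPy sketch, not the authors' code: the input dimensionality (`in_dim=6`, treating a line segment as two 3D endpoints) and the He-style weight initialization are assumptions, since the paper excerpt does not specify either; only the layer count, widths, activations, and the cosine learning-rate schedule come from the quoted details.

```python
import math
import numpy as np

def softplus(x, beta=100.0):
    # Softplus with beta = 100, as the paper recommends following
    # Atzmon & Lipman (2020); logaddexp keeps beta*x from overflowing exp().
    return np.logaddexp(0.0, beta * x) / beta

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_mlp(in_dim=6, hidden=512, depth=8, out_dim=2, seed=0):
    # 8 layers of 512 neurons, with a final 2-neuron output layer.
    # in_dim=6 (two 3D endpoints of a line segment) is an assumption.
    rng = np.random.default_rng(seed)
    dims = [in_dim] + [hidden] * (depth - 1) + [out_dim]
    return [(rng.standard_normal((a, b)) * math.sqrt(2.0 / a), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x):
    # Softplus on every layer except the last; Sigmoid on the final layer,
    # matching the quoted architecture.
    for W, b in params[:-1]:
        x = softplus(x @ W + b)
    W, b = params[-1]
    return sigmoid(x @ W + b)

def cosine_annealed_lr(step, total_steps=100_000, lr_max=1e-3, lr_min=1e-5):
    # Cosine annealing (Loshchilov & Hutter, 2016) from the initial rate
    # 0.001 down to the minimum 1e-5 over the 100,000 training epochs.
    t = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
```

As a sanity check, the schedule starts at exactly `lr_max` (cos(0) = 1) and ends at exactly `lr_min` (cos(π) = -1), and the forward pass maps any batch of inputs to two values in (0, 1) via the final Sigmoid.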