Bayesian Closed Surface Fitting Through Tensor Products

Authors: Olivier Binette, Debdeep Pati, David B. Dunson

JMLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We analyzed the skull and Beethoven data shown in Fig. 1 using our proposed method. As all reasonable methods will do a good job at surface estimation based on a large number of points located very close to the surface of interest, we simulated different levels of sparse and noisy data by sampling a subset of the points in the original data sets and adding different levels of Gaussian measurement errors. ... We summarize the performances of the Crust algorithm and the tensor product approach in Table 1 for a variety of choices of the sample size and noise variance (σ²)."
Researcher Affiliation | Academia | Olivier Binette, Department of Statistical Science, Duke University, Durham, NC 27708-0251, USA; Debdeep Pati, Department of Statistics, Texas A&M University, College Station, TX 77843, USA; David B. Dunson, Department of Statistical Science, Duke University, Durham, NC 27708-0251, USA
Pseudocode | No | The paper describes the Gibbs sampler steps in prose and mathematical equations in Section 4.2 but does not provide a clearly labeled pseudocode or algorithm block. For example: "The sampler cycles through the following steps. Step 1. Define X to be..."
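The "cycles through the following steps" structure the paper describes in prose is the standard Gibbs pattern of drawing each block from its full conditional in turn. A minimal, self-contained illustration of that pattern on a toy target (a standard bivariate normal with correlation ρ) is sketched below; this is generic Gibbs structure, not the paper's surface-fitting sampler:

```python
import numpy as np

def gibbs_bivariate_normal(rho=0.8, n_iter=5000, seed=0):
    """Toy Gibbs sampler for a standard bivariate normal with correlation
    `rho`. Each step draws one coordinate from its full conditional,
    x | y ~ N(rho * y, 1 - rho^2) and symmetrically for y | x, mirroring
    the 'Step 1, Step 2, ...' cycle described in the paper's prose.
    NOT the paper's sampler -- a generic illustration only."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    draws = np.empty((n_iter, 2))
    s = np.sqrt(1.0 - rho**2)  # conditional standard deviation
    for t in range(n_iter):
        x = rng.normal(rho * y, s)  # Step 1: update x given y
        y = rng.normal(rho * x, s)  # Step 2: update y given x
        draws[t] = x, y
    return draws
```

The sample correlation of the resulting draws should be close to ρ, which is a quick sanity check on any hand-rolled Gibbs cycle.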
Open Source Code | No | The paper contains no explicit statement about releasing source code and provides no link to a code repository.
Open Datasets | No | The paper mentions "the skull and Beethoven data shown in Fig. 1" but does not provide specific access information (e.g., a URL, DOI, repository, or formal citation with authors/year) for these datasets, so their public availability and means of access cannot be confirmed.
Dataset Splits | No | The paper states: "we simulated different levels of sparse and noisy data by sampling a subset of the points in the original data sets and adding different levels of Gaussian measurement errors. In many other applications, sparse and noisy data are routinely collected but focusing on two dense, low measurement error data sets allows careful study of the impact of sample size and measurement error on the performance of our proposed Bayesian approach relative to the state-of-the-art Crust algorithm. First we reconstruct the surface from non-noisy sparse data by taking random subsamples of 390 points from the skull and Beethoven point clouds." This describes how the simulated subsets were created, but not the standard train/validation/test splits needed for reproduction.
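The subsample-then-corrupt protocol quoted above (random subsets of 390 points plus isotropic Gaussian measurement error of variance σ²) is straightforward to reproduce in outline. A minimal sketch, assuming the point cloud is an (N, 3) array; `make_sparse_noisy` is a hypothetical helper, since the paper publishes no simulation code:

```python
import numpy as np

def make_sparse_noisy(points, n_sub=390, sigma2=0.0, seed=0):
    """Subsample a point cloud and add isotropic Gaussian measurement error.

    `points`: (N, 3) array of surface points.
    `n_sub`, `sigma2`: sample size and noise variance, the two quantities
    varied across the paper's Table 1 settings.
    Hypothetical helper -- not the authors' code.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=n_sub, replace=False)
    sub = points[idx]
    noise = rng.normal(scale=np.sqrt(sigma2), size=sub.shape)
    return sub + noise
```

With `sigma2=0.0` this reproduces the "non-noisy sparse data" condition (a pure 390-point subsample); increasing `sigma2` gives the noisy conditions.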
Hardware Specification | No | The paper provides no details about the hardware used to run the experiments, such as CPU or GPU models or memory specifications.
Software Dependencies | No | The paper describes the statistical methods and algorithms used (e.g., the Gibbs sampler and MCMC diagnostics) but does not list any specific software packages, libraries, or version numbers (e.g., Python, PyTorch, R packages) that would be needed for replication.
Experiment Setup | Yes | "In each case, we generated 5000 samples and discarded the first 2000 as burn-in. Convergence was monitored using trace plots of the deviance as well as several parameters. Also we get essentially identical posterior modes of n and m with different starting points and moderate changes to hyperparameters. In many applications, the features of the data acquisition device can dictate the amount of noise incorporated. Choosing an informative prior for the noise variance can help in the ability to pick up local features. The hyperparameters in the priors for τj and ξk play a key role in controlling the smoothness of the surface."
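The reported run length (5000 draws, first 2000 discarded as burn-in) is easy to express as a generic MCMC driver. A minimal sketch, where `step` stands in for one full sweep of the paper's Gibbs updates (which are not reproduced here):

```python
import numpy as np

def run_chain(step, init, n_samples=5000, burn_in=2000, seed=0):
    """Run an MCMC chain matching the reported setup: `n_samples` draws
    with the first `burn_in` discarded. `step(state, rng)` is a
    user-supplied transition kernel standing in for one Gibbs sweep;
    the retained draws would then feed trace plots of the deviance
    for convergence monitoring."""
    rng = np.random.default_rng(seed)
    state = init
    draws = []
    for _ in range(n_samples):
        state = step(state, rng)
        draws.append(state)
    return np.asarray(draws[burn_in:])
```

With the reported settings this retains 3000 post-burn-in draws per chain; the paper's additional checks (identical posterior modes from different starting points) correspond to re-running with different `init` and `seed` values.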