GaussianPainter: Painting Point Cloud into 3D Gaussians with Normal Guidance
Authors: Jingqiu Zhou, Lue Fan, Xuesong Chen, Linjiang Huang, Si Liu, Hongsheng Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper includes sections like "5 Experiments", "5.1 Implementation Details", "5.2 Datasets, Evaluation, and Compared Methods", "5.3 Main Results", and "5.4 Ablation Study and Analysis". It quantitatively evaluates the proposed method using metrics such as PSNR, SSIM, and LPIPS, and compares it against baseline methods on established datasets like OmniObject3D and Objaverse. |
| Researcher Affiliation | Academia | The affiliations listed are "1Beihang University", "2Multimedia Laboratory, The Chinese University of Hong Kong", "3Chinese Academy of Sciences", and "4Centre for Perceptual and Interactive Intelligence". All of these are academic institutions or publicly funded research centers, indicating a purely academic affiliation. |
| Pseudocode | No | The paper describes its methodology in natural language and uses mathematical equations, but it does not contain any clearly labeled pseudocode blocks or algorithms. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing its own source code, nor does it provide a link to a code repository. It mentions a video in supplementary material and refers to code availability for other methods but not its own. |
| Open Datasets | Yes | The paper explicitly states and cites the datasets used: "OmniObject3D (Wu et al. 2023) is a 3D dataset with over 6000 objects in 197 categories..." and "Objaverse (Deitke et al. 2023) is a large-scale 3D dataset with more than 80k renderable 3D models for Blender." |
| Dataset Splits | Yes | For OmniObject3D, the paper states: "Within this subset, we sample a validation split for the evaluation of novel view synthesis, which contains two objects for each category." It also details the evaluation protocol: "For the evaluation of the i-th view, we randomly choose another view from the remaining K-1 views as the reference to paint the point cloud into Gaussians. Then the generated Gaussians are rendered into the i-th view to evaluate the quality of the i-th rendered view. The process above is repeated K times with a different reference view and rendering view each time..." |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models, memory specifications, or other computing resources. |
| Software Dependencies | No | The paper mentions software components like DINOv2 and ViT, and refers to UNet architectures, but it does not provide specific version numbers for any of these software libraries or tools. |
| Experiment Setup | No | While the paper describes architectural details, input/output dimensions, and loss functions (L1 loss and SSIM loss), it lacks specific hyperparameters such as learning rate, batch size, number of training epochs, or the specific optimizer used in the main text. |
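The leave-one-out evaluation protocol quoted in the Dataset Splits row can be sketched as a short loop. This is an illustrative reconstruction, not the authors' code: `render_fn` (paint the point cloud into Gaussians from a reference view, then render the target view) and `metric_fn` (e.g. PSNR/SSIM/LPIPS) are hypothetical callables standing in for the paper's pipeline.

```python
import random

def evaluate_object(views, render_fn, metric_fn, seed=0):
    """Leave-one-out novel-view-synthesis protocol, as described in the paper:
    for each of the K views, randomly pick one of the remaining K-1 views as
    the reference, render the held-out i-th view from it, and score the result.
    The mean over all K held-out views is the per-object score."""
    rng = random.Random(seed)
    K = len(views)
    scores = []
    for i in range(K):
        # Reference view is drawn from the other K-1 views, never the target.
        ref = rng.choice([j for j in range(K) if j != i])
        rendered = render_fn(views[ref], views[i])  # hypothetical painter + renderer
        scores.append(metric_fn(rendered, views[i]))
    return sum(scores) / K
```

With a real pipeline, `metric_fn` would be run once per metric (PSNR, SSIM, LPIPS) to reproduce the three columns reported in the paper's main results.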