SyncNoise: Geometrically Consistent Noise Prediction for Instruction-based 3D Editing
Authors: Ruihuang Li, Liyi Chen, Zhengqiang Zhang, Varun Jampani, Vishal M. Patel, Lei Zhang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper includes dedicated sections for "Experiments", "Qualitative Results", "Quantitative Comparison", and "Ablation Study". It presents metrics in Table 1 and visual comparisons in various figures, clearly indicating an empirical study. |
| Researcher Affiliation | Collaboration | Authors are affiliated with academic institutions like "Hong Kong Polytechnic University" and "Johns Hopkins University", as well as industry organizations such as "OPPO Research Institute" and "Stability AI". |
| Pseudocode | No | The paper describes its methodology in prose and through mathematical equations. There are no explicit pseudocode blocks or algorithms presented. |
| Open Source Code | No | The paper links a project page (https://lslrh.github.io/syncnoise.github.io/). However, this is a general project overview page and does not explicitly state that the source code for the described methodology is available there. |
| Open Datasets | No | The paper mentions evaluating methods on "a total of four scenes (i.e., bear, face, fangzhou and person)" but does not provide any specific links, DOIs, repository names, or formal citations for public access to these datasets. |
| Dataset Splits | No | The paper mentions using specific scenes for evaluation but does not provide any details regarding dataset splits (e.g., percentages, sample counts for training, validation, or testing). |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper describes the methods and experiments but does not provide specific version numbers for any software dependencies or libraries used. |
| Experiment Setup | Yes | The paper specifies several experimental settings: "we first edit 80 multi-view images while enforcing consistency on the layer-5 and layer-8 of U-Net features", "pick the view with the highest CLIP direction score in every 10 adjacent views as the anchor view, and reproject them onto neighboring views with about 80% overlap", and training the "3D model for 1000-2000 iterations, depending on the complexity of the scenes". |
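The anchor-view selection quoted above (pick the view with the highest CLIP direction score in every 10 adjacent views) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `select_anchor_views` and its signature are hypothetical, and the CLIP direction scores are assumed to be precomputed per view.

```python
from typing import List

def select_anchor_views(clip_scores: List[float], window: int = 10) -> List[int]:
    """Return one anchor-view index per window of adjacent views.

    For each consecutive group of `window` views, the view with the
    highest (precomputed) CLIP direction score is chosen as the anchor,
    mirroring the selection rule described in the paper's setup.
    """
    anchors = []
    for start in range(0, len(clip_scores), window):
        group = clip_scores[start:start + window]
        # Index of the best-scoring view within this group of adjacent views.
        best = max(range(len(group)), key=lambda i: group[i])
        anchors.append(start + best)
    return anchors
```

In the paper's setting this would run over the 80 edited multi-view images with `window=10`, yielding 8 anchors that are then reprojected onto neighboring views with roughly 80% overlap.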