Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Revisiting CAD Model Generation by Learning Raster Sketch
Authors: Pu Li, Wenhao Zhang, Jianwei Guo, Jinglu Chen, Dong-Ming Yan
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results indicate that RECAD achieves strong performance in unconditional generation, while also demonstrating effectiveness in conditional generation and output editing. Extensive experiments demonstrate the superiority of RECAD in CAD generation through comprehensive comparisons with existing state-of-the-art approaches. |
| Researcher Affiliation | Academia | 1MAIS, Institute of Automation, Chinese Academy of Sciences 2School of Artificial Intelligence, University of Chinese Academy of Sciences 3School of Artificial Intelligence, Beijing Normal University |
| Pseudocode | No | The paper describes the methodology using natural language and mathematical equations but does not include a clearly labeled pseudocode block or algorithm. |
| Open Source Code | No | The paper does not contain any explicit statement about making the source code available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We utilize the extensive DeepCAD dataset (Wu, Xiao, and Zheng 2021) |
| Dataset Splits | Yes | We generate 10,000 CAD models using each method and evaluate them against 2,500 ground-truth models randomly sampled from the reference test set in each run. |
| Hardware Specification | Yes | Models are implemented in PyTorch and trained on four NVIDIA RTX A6000 GPUs. |
| Software Dependencies | No | Models are implemented in PyTorch and trained on four NVIDIA RTX A6000 GPUs. We use an AdamW (Loshchilov and Hutter 2017) optimizer with the learning rate 5e-4 for optimization. The paper mentions PyTorch, the AdamW optimizer, and PNDM, but does not specify their version numbers. |
| Experiment Setup | Yes | We use an AdamW (Loshchilov and Hutter 2017) optimizer with the learning rate 5e-4 for optimization. The sketch image VAE is trained for 500 epochs at batch size 512 and two denoisers are trained for 2000 epochs at batch size 256. |
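The training hyperparameters quoted above can be collected into a single configuration record. The sketch below is illustrative only: the field names and the `TrainConfig` class are our own invention, and only the numeric values (optimizer, learning rate, epochs, batch sizes) come from the paper's reported setup.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainConfig:
    """Hypothetical container for the hyperparameters reported for RECAD.

    Values are taken from the paper's experiment setup; the structure and
    naming here are assumptions, not the authors' actual code.
    """
    optimizer: str = "AdamW"        # Loshchilov and Hutter 2017
    learning_rate: float = 5e-4     # shared across both training stages
    vae_epochs: int = 500           # sketch image VAE
    vae_batch_size: int = 512
    denoiser_epochs: int = 2000     # each of the two denoisers
    denoiser_batch_size: int = 256

cfg = TrainConfig()
print(cfg.optimizer, cfg.learning_rate)
```

A record like this makes it easy to see that the two training stages share an optimizer and learning rate but differ in epoch count and batch size; note that GPU count (four RTX A6000s) and library versions are reported separately and are not part of this sketch.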