Targeted control of fast prototyping through domain-specific interface
Authors: Yu-Zhe Shi, Mingchen Liu, Hanlu Ma, Qiao Xu, Huamin Qu, Kun He, Lecheng Ruan, Qining Wang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Both machine-based evaluations and human studies on fast prototyping across various product design domains demonstrate the interface's potential to function as an auxiliary module for Large Language Models, enabling precise and effective targeted control of prototype models. |
| Researcher Affiliation | Academia | (1) Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing, China; (2) Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR; (3) School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China; (4) Division of Emerging Interdisciplinary Areas, The Hong Kong University of Science and Technology, Hong Kong SAR. Correspondence to: Lecheng Ruan <EMAIL>, Qining Wang <EMAIL>. |
| Pseudocode | No | "We devise the interface's operational mechanism and develop an algorithm for its automated domain specification." The paper describes this algorithm and references Figures 3 and 4, but no structured pseudocode block is provided. |
| Open Source Code | Yes | The project page with supplementary files for reproducing the results of this paper will be available at https://autodsl.org/concept/papers/icml25shi.html. |
| Open Datasets | No | The paper mentions using "eight product design targets from major consumer market supercategories (Pei et al., 2011)" but does not provide specific links, DOIs, or repository names for these design targets or any collected data. It states, "our approach relies on designers' on-the-fly instructions, making it impractical to prepare ground truth data (e.g., point clouds) beforehand." |
| Dataset Splits | No | The human study includes 50 participants, each holding at minimum a Bachelor's degree in industrial design or related fields... "we limit the iteration count to ten rather than allowing unlimited iterations until the modeling becomes subjectively satisfying." The paper describes a human study protocol, but it does not specify train/test/validation splits for a machine learning dataset. |
| Hardware Specification | No | "We leverage OpenAI's GPT-4o API as the backbone LLM for both domain adaptation and runtime execution of the interface." No specific local hardware (e.g., GPU or CPU models) used for running the experiments is mentioned. |
| Software Dependencies | No | "We leverage OpenAI's GPT-4o API as the backbone LLM for both domain adaptation and runtime execution of the interface... parsing the FreeCAD library's source code (hosted at https://wiki.freecad.org/Category:Developer_Documentation) using Doxygen." While GPT-4o is mentioned, specific version numbers for FreeCAD or Doxygen are not provided. |
| Experiment Setup | Yes | We design a realistic fast prototyping scenario to simulate the targetedness-demanding design practices. Given the fundamental differences between fast prototyping and production-ready modeling (Hallgrimsson, 2012), we limit the iteration count to ten rather than allowing unlimited iterations until the modeling becomes subjectively satisfying. At the beginning, participants receive information about their target domain of product design. During each iteration, participants provide one natural language instruction to the interface and receive images of rendered models generated by our interface and alternative methods. Participants then rank these images based on how closely they match their intended effect. |
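
The Software Dependencies row notes that GPT-4o (via the OpenAI API) is the backbone LLM, but the paper excerpt includes no code. Below is a minimal sketch of how one might drive such an interface through the OpenAI Python SDK; the system prompt, the function name `instruction_to_interface_call`, and the notion of a "domain-specific interface command" are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): querying GPT-4o via the
# OpenAI API as the backbone LLM, per the Software Dependencies row above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def instruction_to_interface_call(instruction: str) -> str:
    """Map one natural-language design instruction to a (hypothetical)
    domain-specific interface command using GPT-4o."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # low-variance output aids reproducibility
        messages=[
            # Hypothetical system prompt; the paper's actual prompts are
            # not reproduced in this report.
            {"role": "system",
             "content": "Translate the user's design instruction into a "
                        "single command of the domain-specific interface."},
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content


# Example (hypothetical): one iteration of the ten-iteration study protocol,
# in which a participant issues a single natural-language instruction.
print(instruction_to_interface_call("Make the bottle neck 20% narrower."))
```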