Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
DiffVL: Scaling Up Soft Body Manipulation using Vision-Language Driven Differentiable Physics
Authors: Zhiao Huang, Feng Chen, Yewen Pu, Chunru Lin, Hao Su, Chuang Gan
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experiments): "In this section, we aim to justify the effectiveness of our vision-language task representation in guiding the differentiable physics solver." |
| Researcher Affiliation | Collaboration | Zhiao Huang (Computer Science & Engineering, University of California, San Diego); Feng Chen (Institute for Interdisciplinary Information Sciences, Tsinghua University); Yewen Pu (Autodesk); Chunru Lin (UMass Amherst); Hao Su (University of California, San Diego); Chuang Gan (MIT-IBM Watson AI Lab, UMass Amherst) |
| Pseudocode | No | The paper describes a domain-specific language (DSL) and shows examples of optimization programs in Figure 5 and Table 2, but it does not present a formal pseudocode or algorithm block for the overall method. |
| Open Source Code | No | "both the GUI and dataset will be made public." (This is a future promise, not current concrete access to source code for the described methodology.) |
| Open Datasets | No | "Using our task annotation tool, we have created a dataset called SoftVL100, which consists of 100 tasks, and there are more than 4 stages on average. [...] both the GUI and dataset will be made public." (The dataset is created, but concrete access information such as a link, DOI, or specific citation for public availability is not provided.) |
| Dataset Splits | No | The paper states "We picked 20 representative task stages as our test bed from the SoftVL100" but does not provide specific percentages or counts for the training, validation, and test splits needed for reproducibility. |
| Hardware Specification | Yes | "For a single-stage task, it takes 10 minutes for 300 gradient descent steps on a machine with NVIDIA GeForce RTX 2080, for optimizing a trajectory with 80 steps." |
| Software Dependencies | No | The paper mentions "PyTorch [68]" and "stable-baselines3" but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | "For differentiable physics solvers, we run Adam [39] optimization for 500 gradient steps using a learning rate of 0.02. [...]" See also Table 3 (Parameters for Reinforcement Learning). |
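The reported solver configuration (Adam, 500 gradient steps, learning rate 0.02) can be sketched as a plain optimization loop. This is a hypothetical illustration only: the paper's actual objective is a differentiable-physics trajectory loss, which is replaced here by a toy quadratic, and the function names are not from the paper.

```python
import math

def adam_optimize(grad_fn, x0, steps=500, lr=0.02,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Plain-Python Adam loop over a single scalar parameter
    (hyperparameters steps=500 and lr=0.02 follow the paper's report)."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Toy stand-in loss: L(x) = (x - 3)^2, so dL/dx = 2 * (x - 3);
# the loop converges near the minimum at x = 3.
x_opt = adam_optimize(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

In the paper's setting the scalar parameter would instead be the full action trajectory, with gradients supplied by the differentiable physics simulator rather than an analytic derivative.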