Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Budget-Aware Sequential Brick Assembly with Efficient Constraint Satisfaction

Authors: Seokjun Ahn, Jungtaek Kim, Minsu Cho, Jaesik Park

TMLR 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrate that our method successfully generates a variety of brick structures and outperforms existing methods with Bayesian optimization, deep graph generative model, and reinforcement learning. We test our method on a completion task for sequential brick assembly where unseen partial structures are given. As shown in Table 2, our method outperforms the other three baseline methods in terms of IoU. We further conduct experiments on budget-aware scenarios. We analyze the components included in BrECS by verifying each of them in completion or generation tasks, as presented in Tables 6, 7, and 8.
Researcher Affiliation Academia Seokjun Ahn EMAIL POSTECH; Jungtaek Kim EMAIL University of Pittsburgh; Minsu Cho EMAIL POSTECH; Jaesik Park EMAIL Seoul National University
Pseudocode Yes Algorithm 1 (Assembly of a single brick) — Input: voxels of the structure at the current time step B, a list of assembled brick positions P, and the brick size w_b × d_b. Output: position of a pivot brick (a, b, c) and a relative position of the next brick (x, y, z). ... Algorithm 2 (Model training for brick assembly) — Input: dataset D, a batch size M, a sequence skipping value k. Output: model parameters for brick assembly θ.
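The Algorithm 1 excerpt above (pivot selection plus relative offset for the next brick) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the names `assemble_single_brick`, `candidate_offsets`, and `score_fn` are assumptions, and the collision check is simplified to single-voxel occupancy rather than the full w_b × d_b brick footprint.

```python
# Hypothetical sketch of Algorithm 1 (assembly of a single brick).
# Inputs mirror the pseudocode: occupied voxels B, assembled positions P.
from typing import Callable, List, Set, Tuple

Pos = Tuple[int, int, int]

def assemble_single_brick(
    occupancy: Set[Pos],            # occupied voxel coordinates of the structure B
    placed: List[Pos],              # positions of already-assembled bricks P
    candidate_offsets: List[Pos],   # assumed discrete set of relative placements
    score_fn: Callable[[Pos, Pos], float],  # assumed model-predicted placement score
) -> Tuple[Pos, Pos]:
    """Return a pivot brick (a, b, c) and a relative offset (x, y, z)."""
    best = None
    for pivot in placed:
        for off in candidate_offsets:
            nxt = (pivot[0] + off[0], pivot[1] + off[1], pivot[2] + off[2])
            if nxt in occupancy:    # skip placements colliding with the structure
                continue
            s = score_fn(pivot, off)
            if best is None or s > best[0]:
                best = (s, pivot, off)
    assert best is not None, "no feasible placement"
    return best[1], best[2]
```

A real implementation would additionally enforce brick connectivity and the w_b × d_b footprint when testing collisions.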
Open Source Code Yes The implementation of our method is available at https://github.com/joonahn/BrECS.
Open Datasets Yes Dataset. To generate ground-truth assembly sequences and the training pairs based on the ground-truth sequences, we use the ModelNet40 dataset (Wu et al., 2015). In particular, the categories of airplane, table, and chair are used for assembly experiments.
Dataset Splits No The paper mentions using the ModelNet40 dataset and refers to a 'training dataset' and a 'test dataset' for specific categories (airplane, table, chair), but it does not specify explicit percentages, counts, or the methodology for how ModelNet40 was split into these training and testing sets. It only mentions converting 3D meshes into (64, 64, 64)-sized voxel grids and scaling them down.
Hardware Specification Yes We train our model on a server with four NVIDIA GeForce RTX 2080 Ti GPUs.
Software Dependencies No The paper mentions using 'Adam optimizer (Kingma & Ba, 2015)' and 'Minkowski Engine (Choy et al., 2019)' but does not specify version numbers for these or any other software components.
Experiment Setup Yes We train our model with the Adam optimizer (Kingma & Ba, 2015) at a fixed learning rate of 5e-4 and a weight decay of 0.0, a batch size of 32, sequence skipping with a step size k = 8, an internal buffer size of 1024, and a maximum number of bricks of 150. The input size of our model is (64, 64, 64), and the output size is also (64, 64, 64). We train the model until reaching 100k steps.
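The reported hyperparameters above can be collected into a single configuration for a reproduction attempt. This is a minimal sketch assembled from the quoted setup; the dictionary name and key names are assumptions, and only the values come from the paper's text.

```python
# Hypothetical training configuration mirroring the reported experiment setup.
# Key names are illustrative; values are taken from the paper's description.
TRAIN_CONFIG = {
    "optimizer": "Adam",          # Kingma & Ba (2015)
    "learning_rate": 5e-4,        # fixed learning rate
    "weight_decay": 0.0,
    "batch_size": 32,
    "sequence_skip_k": 8,         # sequence skipping step size
    "buffer_size": 1024,          # internal buffer
    "max_bricks": 150,            # maximum number of bricks
    "voxel_grid": (64, 64, 64),   # input and output sizes
    "train_steps": 100_000,
}
```

Software versions are not reported in the paper (see Software Dependencies above), so they would still need to be pinned independently for a faithful reproduction.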