Dreamweaver: Learning Compositional World Models from Pixels
Authors: Junyeob Baek, Yi-Fu Wu, Gautam Singh, Sungjin Ahn
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments, we demonstrate our model outperforms current state-of-the-art baselines for world modeling when evaluated under the DCI framework across multiple datasets. |
| Researcher Affiliation | Academia | Junyeob Baek KAIST Yi-Fu Wu Rutgers University Gautam Singh Rutgers University Sungjin Ahn KAIST, New York University |
| Pseudocode | No | The paper describes the architecture and methods in detailed text and mathematical equations (e.g., Section 2, Section 3) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | REPRODUCIBILITY STATEMENT We are committed to ensuring the reproducibility of our research. To this end, we intend to make all resources, including the code and datasets specifically designed for this work, publicly available. |
| Open Datasets | Yes | Moving-CLEVRTex, where objects have complex textures on them. In line with previous work (Singh et al., 2023; Wu et al., 2024), the highest level of visual complexity is in Moving-CLEVRTex... You can directly download through clevrtex_generation (Karazija et al., 2021) code (link). |
| Dataset Splits | No | The paper mentions custom datasets like Moving-Sprites, Moving-CLEVR, and Dancing-CLEVR, and also creating an OOD test set for Dancing-Sprites, but does not provide specific percentages, counts, or explicit instructions for train/test/validation splits for any dataset. |
| Hardware Specification | Yes | Each model was trained on NVIDIA GeForce RTX 4090 GPUs with 24GB of memory. |
| Software Dependencies | No | The paper states: "We trained all models using the Adam optimizer (Kingma & Ba, 2014) with β1 set to 0.9 and β2 set to 0.999." and refers to open-source resources for baselines (STEVE, Sys Binder, RSSM) but does not list specific version numbers for its own implementation's software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | Table 1: Hyperparameters of our model used in our experiments (values listed in the order Moving-Sprites / Dancing-Sprites / CLEVR / CLEVRTex; a single value applies to all datasets).<br>**General:** Batch Size 24 / 24 / 24 / 48; Training Steps 400K; Image Size 64×64 / 64×64 / 64×64 / 128×128; Context Length T 2 / 3 / 2 / 2; Prediction Length K 2 / 3 / 2 / 2; Grad Clip (norm) 0.5.<br>**RBSU:** # Iterations 3; # Slots 5; # Prototypes 64 / 64 / 64 / 128; # Blocks 8; Block Size 96; Learning Rate 0.00005.<br>**Discrete VAE:** Patch Size 4×4; Vocabulary Size 4096; Temp. Start 1.0; Temp. End 0.1 / 1.0 / 0.1 / 0.1; Temp. Decay Steps 60K; Learning Rate 0.0003.<br>**Transformer Decoder:** # Layers 8 / 4 / 8 / 8; # Heads 4; Hidden Size 192; Dropout 0.1; Learning Rate 0.0003 / 0.0003 / 0.0003 / 0.0005. |
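For anyone attempting a reimplementation, the Table 1 hyperparameters above can be transcribed into a plain per-dataset config. This is a minimal sketch: the section names (`general`, `rbsu`, `dvae`, `transformer_decoder`) and key names are our own shorthand, not identifiers from the authors' (not yet released) code; only the numeric values come from the paper, including the Adam β values quoted under Software Dependencies.

```python
# Hedged transcription of Table 1 into a Python config, keyed by dataset.
# Key and section names are illustrative shorthand, not the authors' own.

DATASETS = ["moving_sprites", "dancing_sprites", "clevr", "clevrtex"]

def make_config(dataset):
    """Return the per-dataset hyperparameters reported in Table 1."""
    if dataset not in DATASETS:
        raise ValueError(f"unknown dataset: {dataset}")
    i = DATASETS.index(dataset)
    pick = lambda *vals: vals[i]  # select this dataset's column
    return {
        "general": {
            "batch_size": pick(24, 24, 24, 48),
            "training_steps": 400_000,
            "image_size": pick(64, 64, 64, 128),   # square images
            "context_length": pick(2, 3, 2, 2),    # T
            "prediction_length": pick(2, 3, 2, 2), # K
            "grad_clip_norm": 0.5,
        },
        "rbsu": {
            "num_iterations": 3,
            "num_slots": 5,
            "num_prototypes": pick(64, 64, 64, 128),
            "num_blocks": 8,
            "block_size": 96,
            "lr": 5e-5,
        },
        "dvae": {
            "patch_size": 4,           # 4x4 patches
            "vocab_size": 4096,
            "temp_start": 1.0,
            "temp_end": pick(0.1, 1.0, 0.1, 0.1),
            "temp_decay_steps": 60_000,
            "lr": 3e-4,
        },
        "transformer_decoder": {
            "num_layers": pick(8, 4, 8, 8),
            "num_heads": 4,
            "hidden_size": 192,
            "dropout": 0.1,
            "lr": pick(3e-4, 3e-4, 3e-4, 5e-4),
        },
        # Paper: Adam with beta1 = 0.9, beta2 = 0.999 for all models.
        "adam_betas": (0.9, 0.999),
    }
```

Usage: `make_config("clevrtex")["general"]["batch_size"]` yields 48, matching the one dataset whose batch size and image resolution differ from the rest.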