Unbounded: A Generative Infinite Game of Character Life Simulation

Authors: Jialu Li, Yuanzhen Li, Neal Wadhwa, Yael Pritch, David E. Jacobs, Michael Rubinstein, Mohit Bansal, Nataniel Ruiz

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our system through both qualitative and quantitative analysis, showing significant improvements in character life simulation, user instruction following, narrative coherence, and visual consistency for both characters and the environments compared to traditional related approaches.
Researcher Affiliation | Collaboration | Google; The University of North Carolina at Chapel Hill
Pseudocode | No | The paper describes methods and architectures but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper links the project website https://infinite-generative-game.github.io/, which states "Code coming soon!", so the code is not currently available.
Open Datasets | No | We collect an evaluation dataset consisting of 5,000 (character image, environment description, text prompt) triplets with GPT-4o (OpenAI, 2023).
Dataset Splits | Yes | We collect an evaluation dataset consisting of 5,000 (character image, environment description, text prompt) triplets with GPT-4o (OpenAI, 2023). It includes 5 characters (dog, cat, panda, witch, and wizard), 100 diverse environments, and 1,000 text prompts (10 per environment). ... We collect an additional evaluation dataset with 100 user-simulator interaction samples ... We distill the LLM using 5,000 user-simulator interaction samples collected from GPT-4o.
Hardware Specification | Yes | We train a DreamBooth LoRA of rank 16 with batch size 1 and a constant learning rate 1e-4 for 500 steps on a single A100... We train the LLM for 6,500 steps, with batch size 8, distributed across 4 A100s, and learning rate 1e-4.
Software Dependencies | No | The paper mentions using models like SDXL and Gemma-2B as foundations but does not specify versions for other ancillary software dependencies such as programming languages or libraries (e.g., Python or PyTorch versions).
Experiment Setup | Yes | We train a DreamBooth LoRA of rank 16 with batch size 1 and a constant learning rate 1e-4 for 500 steps... The dynamic mask ratio r% is set to 60%. ... We train the LLM for 6,500 steps, with batch size 8... and learning rate 1e-4. The learning rate scheduler is set to be cosine annealing (Loshchilov & Hutter, 2016), and the warmup steps ratio is 0.03.
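The reported LLM fine-tuning schedule (6,500 steps, peak learning rate 1e-4, cosine annealing with a 0.03 warmup-steps ratio) can be sketched as follows. This is a minimal sketch of the common warmup-then-cosine shape; the paper does not specify the exact implementation, so the function name and the linear-warmup detail are assumptions.

```python
import math

# Hyperparameters reported in the paper's experiment setup.
TOTAL_STEPS = 6_500    # LLM training steps
WARMUP_RATIO = 0.03    # warmup steps ratio
PEAK_LR = 1e-4         # peak learning rate

WARMUP_STEPS = int(TOTAL_STEPS * WARMUP_RATIO)  # 195 steps

def lr_at(step: int) -> float:
    """Learning rate at a given optimizer step.

    Linear warmup (an assumed but standard choice) up to PEAK_LR,
    then cosine annealing down to ~0 at TOTAL_STEPS.
    """
    if step < WARMUP_STEPS:
        return PEAK_LR * step / max(1, WARMUP_STEPS)
    progress = (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))
```

Under these settings the rate climbs linearly for the first 195 steps, peaks at 1e-4, and decays smoothly to zero by step 6,500.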