GSDiff: Synthesizing Vector Floorplans via Geometry-enhanced Structural Graph Generation

Authors: Sizhe Hu, Wenming Wu, Yuntao Wang, Benzhu Xu, Liping Zheng

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that the proposed method surpasses existing techniques, enabling free generation and constrained generation, marking a shift towards structure generation in architectural design. Experiments: Our method is implemented using PyTorch and trained on an NVIDIA GeForce GTX 4090 GPU. Qualitative Evaluation (unconstrained generation): Unconstrained generation means that diverse floorplans can be generated without any inputs. Quantitative Evaluation (distribution comparison): The distribution comparison is used to analyze the overall generation capability of a generative model by comparing the differences between the distributions of generated data and real data. Ablation Study: We introduce two geometric enhancement strategies: one for alignment enhancement that optimizes the alignment error of nodes in mixed-base representations, and the other for perception enhancement that enhances the geometric perception ability of our edge prediction. To evaluate these two strategies, we have conducted a series of ablation experiments (Table 2).
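The row above mentions comparing the distributions of generated and real data, but this report does not name the specific metric used. As one common illustration (an assumption, not the paper's confirmed metric), a 1-D empirical Wasserstein-1 distance between two equal-size samples can be sketched as:

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-D Wasserstein-1 distance between two equal-size samples.

    For equal-size samples this reduces to the mean absolute difference
    between the sorted values of the two samples.
    NOTE: illustrative only; the paper's actual distribution-comparison
    metric is not specified in this report.
    """
    assert len(xs) == len(ys), "samples must be the same size"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Two samples shifted by 1 are exactly distance 1 apart.
print(wasserstein_1d([0, 1, 2], [1, 2, 3]))
```

Real evaluations typically compare feature statistics (e.g. FID-style metrics) rather than raw 1-D samples, but the sorted-sample form above captures the same idea of measuring how far two empirical distributions are from each other.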
Researcher Affiliation | Academia | Sizhe Hu, Wenming Wu*, Yuntao Wang, Benzhu Xu, Liping Zheng (Hefei University of Technology). EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the methods and architectures in detail with figures and mathematical formulas, but it does not contain a specific block labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | Code: https://github.com/SizheHu/GSDiff
Open Datasets | Yes | We have used the RPLAN dataset (Wu et al. 2019) for training and testing, which contains more than 80K residential floorplans with dense annotation. Our method is also evaluated on the LIFULL dataset (LIFULL Co. 2016).
Dataset Splits | Yes | The sample sizes for validation and testing are 3,000 each, and the rest is used for training. For the LIFULL dataset, 500 samples are used for validation, 500 for testing, and the remainder for training our unconstrained model.
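The reported RPLAN split (3,000 validation, 3,000 test, rest training, out of more than 80K floorplans) can be sketched as a simple index split. The shuffling, seed, and function name below are assumptions for illustration; the paper does not state how the split was drawn.

```python
import random

def split_dataset(n_total, n_val=3000, n_test=3000, seed=0):
    """Split sample indices into (train, val, test) lists.

    Validation and test sets are 3,000 samples each, as reported for
    RPLAN; everything else goes to training. The random shuffle and
    seed are illustrative assumptions, not the paper's procedure.
    """
    idx = list(range(n_total))
    random.Random(seed).shuffle(idx)
    val = idx[:n_val]
    test = idx[n_val:n_val + n_test]
    train = idx[n_val + n_test:]
    return train, val, test

# With a hypothetical 80,000 samples: 74,000 train / 3,000 val / 3,000 test.
train, val, test = split_dataset(80000)
print(len(train), len(val), len(test))
```

The same helper covers the LIFULL split by passing `n_val=500, n_test=500`.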
Hardware Specification | Yes | Our method is implemented using PyTorch and trained on an NVIDIA GeForce GTX 4090 GPU.
Software Dependencies | No | Our method is implemented using PyTorch and trained on an NVIDIA GeForce GTX 4090 GPU. To ensure the quality of training at each stage, we train each network separately, using the Adam optimizer (Kingma 2014) with an initial learning rate of 1 × 10^-4. The paper mentions software like PyTorch and the Adam optimizer but does not specify their version numbers.
Experiment Setup | Yes | We train each network separately, using the Adam optimizer (Kingma 2014) with an initial learning rate of 1 × 10^-4. We set σ_c = 1 and p_flip = 0.01. We adopt a time-related weighting scheme ω(t) (Chen et al. 2024), assigning higher weights at smaller time steps t.
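The setup above assigns higher loss weights at smaller diffusion time steps via a time-related scheme ω(t). The exact form of ω(t) from Chen et al. (2024) is not given in this report, so the linearly decreasing weight below is a placeholder assumption that only illustrates "higher weights at smaller t":

```python
T = 1000  # assumed total number of diffusion time steps

def omega(t):
    """PLACEHOLDER time-related weight: larger for smaller t.

    The actual scheme from Chen et al. (2024) is not specified in this
    report; this linear decay only illustrates the stated property.
    """
    return 1.0 - t / T

def weighted_loss(per_step_errors):
    """Average per-timestep errors {t: error}, weighted by omega(t)."""
    total = sum(omega(t) * e for t, e in per_step_errors.items())
    return total / len(per_step_errors)

# Equal raw errors contribute more at small t than at large t.
print(omega(0), omega(500), omega(999))
```

In an actual PyTorch training loop, such a weight would simply multiply the per-sample loss before averaging, with the optimizer configured as `torch.optim.Adam(model.parameters(), lr=1e-4)` per the reported setup.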