Differentiable and Learnable Wireless Simulation with Geometric Transformers

Authors: Thomas Hehn, Markus Peschl, Tribhuvanesh Orekondy, Arash Behboodi, Johann Brehmer

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our approach on a range of tasks (i.e., signal strength and delay spread prediction, receiver localization, and geometry reconstruction) and find that Wi-GATr is accurate, fast, sample-efficient, and robust to symmetry-induced transformations. Remarkably, we find our results also translate well to the real world: Wi-GATr demonstrates more than 35% lower error than hybrid techniques, and 70% lower error than a calibrated wireless tracer.
Researcher Affiliation | Industry | Thomas Hehn, Markus Peschl, Tribhuvanesh Orekondy, Arash Behboodi, Johann Brehmer (Qualcomm AI Research)
Pseudocode | Yes | Algorithm 1: Diffusion Wi-GATr Training; Algorithm 2: Diffusion Wi-GATr Sampling
Open Source Code | Yes | Our Wi-GATr code is available at https://github.com/Qualcomm-AI-research/Wi-GATr.
Open Datasets | Yes | Therefore, we generate two datasets that feature indoor scenes and channel information at a frequency of 3.5 GHz using Wireless InSite, a state-of-the-art ray-tracing simulator (Remcom). We focus on indoor scenes as transmission plays a stronger role than outdoors. The datasets provide detailed characteristics for each path between Tx and Rx, such as gain, delay, angle of departure and arrival at Tx/Rx, and the electric field at the receiver itself, which allows users to compute various quantities of interest themselves. See Appendix D for more details. The datasets are available at https://github.com/Qualcomm-AI-research/Wi In Sim.
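As an illustration of deriving a quantity of interest from such per-path data, the sketch below combines per-path gains into a total gain via a noncoherent power sum. The gain values and the aggregation rule are illustrative assumptions, not the released datasets' actual schema:

```python
import numpy as np

# Hypothetical per-path gains in dB for one Tx/Rx pair; in practice these
# would be read from the released dataset files (gain, delay, angles, E-field).
path_gain_db = np.array([-60.0, -63.0, -70.0, -75.0])

# Noncoherent power sum: convert each path gain to linear power, add, and
# convert the total back to dB.
total_gain_db = 10.0 * np.log10(np.sum(10.0 ** (path_gain_db / 10.0)))
```

The total is dominated by the strongest path and always lies above it, here about -57.9 dB versus -60 dB for the strongest single path.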
Dataset Splits | Yes | We take 5000 layouts from Wi3Rooms (Orekondy et al., 2022b) and randomly sample 3D Tx positions and Rx positions. In Appendix D we provide more details and define training, validation, and test splits as well as an out-of-distribution set to test the robustness of different models. ... The training data comprises 10k floor layouts, while test and validation sets each contain 1k unseen layouts, Tx, and Rx locations. Again, we introduce an OOD validation set with 5 layouts where we manually remove parts of the walls such that two rooms become connected.
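A layout-level split (so that test layouts are never seen during training) can be sketched as follows. The helper name and sizes are illustrative; the paper's actual split definitions live in its Appendix D:

```python
import random

def split_by_layout(layout_ids, n_val, n_test, seed=0):
    """Partition layout IDs into disjoint train/val/test sets.

    Splitting at the layout level (rather than the sample level) ensures that
    validation/test floor plans are entirely unseen during training.
    """
    rng = random.Random(seed)
    ids = list(layout_ids)
    rng.shuffle(ids)
    test = ids[:n_test]
    val = ids[n_test:n_test + n_val]
    train = ids[n_test + n_val:]
    return train, val, test

# Sizes matching the quoted setup: 10k training layouts, 1k val, 1k test.
train, val, test = split_by_layout(range(12000), n_val=1000, n_test=1000)
```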
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU/CPU models or memory specifications. It only mentions inference speed comparisons without specifying the underlying hardware.
Software Dependencies | No | The paper mentions software like "Wireless InSite" (Remcom) and "Sionna RT" (Hoydis et al., 2022) but does not provide specific version numbers for these or any other ancillary software components. It also mentions "Python" but without a specific version number or any versioned libraries.
Experiment Setup | Yes | All models are trained on the mean squared error between the model output and the total received power in dBm. We use a batch size of 64 (except for SEGNN, where we use a smaller batch size due to memory limitations), the Adam optimizer, an initial learning rate of 10^-3, and a cosine annealing scheduler. Models are trained for 5×10^5 steps on the Wi3R dataset and for 2×10^5 steps on the WiPTR dataset.
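The quoted recipe can be sketched in PyTorch as below. The small MLP stands in for Wi-GATr, the data are random placeholders, and the 200-step horizon is shortened for illustration (the paper trains for 5×10^5 or 2×10^5 steps); only the loss, batch size, optimizer, initial learning rate, and scheduler follow the description:

```python
import torch

torch.manual_seed(0)
# Placeholder model; Wi-GATr itself is a geometric-algebra transformer.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial LR 10^-3
total_steps = 200  # shortened; paper: 5x10^5 (Wi3R) / 2x10^5 (WiPTR)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=total_steps)
loss_fn = torch.nn.MSELoss()  # MSE against total received power in dBm

for step in range(total_steps):
    x = torch.randn(64, 16)              # batch size 64
    target = x.sum(dim=1, keepdim=True)  # placeholder regression target
    loss = loss_fn(model(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()  # cosine annealing decays the LR toward zero at T_max
```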