Physics-Informed Generative Modeling of Wireless Channels
Authors: Benedikt Böck, Andreas Oeldemann, Timo Mayer, Francesco Rossetto, Wolfgang Utschick
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 6. Experiments. 6.1. Experimental Setup. Datasets: For evaluation, we use four datasets. We use a modified standardized 3GPP spatial channel model for SIMO, which we adapted to better illustrate our method. For simulations with OFDM, we use two different QuaDRiGa-based datasets (Jaeckel et al., 2014). One (5G-Urban) represents an urban macro-cell, in which users can be in line-of-sight (LOS), non-LOS (NLOS), as well as indoor and outdoor. The other (5G-Rural) represents a rural macro-cell, in which all users are in LOS. We also use the ray tracing database DeepMIMO (Alkhateeb, 2019) for SIMO in Appendix F. For the channel observations in OFDM, we generate one random selection matrix A extracting M entries from h and apply it to every training channel (cf. (9)). In all simulations and each training sample, we draw signal-to-noise ratios (SNRs) uniformly distributed between 5 and 20 dB, defining the noise variance $\sigma_i^2$ (cf. Section 4). A detailed description of the datasets, chosen configurations, and pre-processing is given in Appendix D. Evaluation metrics: We evaluate the parameter generation performance by the power angular profile $P_\omega^{(R)}(q) = \frac{1}{N_\mathrm{test}} \sum_{i=1}^{N_\mathrm{test}} \frac{|s_{R,i}^{(q)}|^2}{\sum_{g=-S_R/2}^{S_R/2-1} |s_{R,i}^{(g)}|^2}$ (15) as well as the channel-wise angular spread $S_\omega^{(R)}(s_R) = \frac{\sum_{g=-S_R/2}^{S_R/2-1} (\omega_g^{(R)} - \mu_R)^2 |s_R^{(g)}|^2}{\sum_{g=-S_R/2}^{S_R/2-1} |s_R^{(g)}|^2}$ (16), with $s_{R,i}^{(g)}$ being the $g$th entry in the $i$th newly generated sample $s_{R,i}$. Moreover, $\omega_g^{(R)} = g\pi/S_R$ and $\mu_{R,i} = \big(\sum_{g=-S_R/2}^{S_R/2-1} \omega_g^{(R)} |s_{R,i}^{(g)}|^2\big) / \big(\sum_{g=-S_R/2}^{S_R/2-1} |s_{R,i}^{(g)}|^2\big)$ (Zhang et al., 2017). For the channel generation performance, we map newly generated $s_R$ (or $s_{t,f}$) to h using a dictionary $D_R$ (or $D_{t,f}$) (cf. Section 3.2) and evaluate the channel generation with the cross-validation method from (Baur et al., 2025; Xiao et al., 2022). Specifically, we first train each generative model using Y (cf. Section 4) and generate $N^{(\mathrm{gen})}$ channels with each model to train an autoencoder for reconstruction by minimizing the mean squared error (MSE) for each generative model separately. We then compress and reconstruct ground-truth (i.e., QuaDRiGa) channels using these trained autoencoders and evaluate the normalized MSE $\mathrm{nMSE} = \frac{1}{N_\mathrm{test}} \sum_{i=1}^{N_\mathrm{test}} \|\hat{h}_i - h_i\|_2^2 / N$ and the cosine similarity $\rho_c = \frac{1}{N_\mathrm{test}} \sum_{i=1}^{N_\mathrm{test}} |\hat{h}_i^{\mathrm{H}} h_i| / (\|\hat{h}_i\|_2 \|h_i\|_2)$, with $h_i$ and $\hat{h}_i$ being the ground-truth and reconstructed channel. 6.2. Results. Modified 3GPP: In Fig. 5 a), the power angular profile $P_\omega^{(R)}(q)$ as well as a histogram of the angular spread $S_\omega^{(R)}(s_R)$ is given. The number of antennas N = M is set to 16, and the number of grid points S is set to 256. The number $N_t$ of training samples is 10 000. In Fig. 5 b) and c), exemplary training samples and newly generated samples are shown. In general, all power angular profiles are consistent with ground truth by, e.g., not assigning power to directions absent in the ground-truth profile. |
| Researcher Affiliation | Collaboration | 1Technical University of Munich, Munich, Germany 2Rohde & Schwarz, Munich, Germany. Correspondence to: Benedikt Böck <EMAIL>. |
| Pseudocode | Yes | G. Pseudocode for Parameter and Channel Generation and Implementation Specifications. Algorithms 1-3 summarize the generation process of parameters, channels, and channels with a constrained path number, respectively. All models and experiments have been implemented in Python 3.10.13 using PyTorch 2.1.2 and pytorch-cuda 12.1. All simulations have been carried out on an NVIDIA A40 GPU. |
| Open Source Code | Yes | Source code is available at https://github.com/beneboeck/phy-inf-gen-mod-wireless. |
| Open Datasets | Yes | Datasets: For evaluation, we use four datasets. We use a modified standardized 3GPP spatial channel model for SIMO, which we adapted to better illustrate our method. For simulations with OFDM, we use two different QuaDRiGa-based datasets (Jaeckel et al., 2014). One (5G-Urban) represents an urban macro-cell, in which users can be in line-of-sight (LOS), non-LOS (NLOS), as well as indoor and outdoor. The other (5G-Rural) represents a rural macro-cell, in which all users are in LOS. We also use the ray tracing database DeepMIMO (Alkhateeb, 2019) for SIMO in Appendix F. ... QuaDRiGa is a freely accessible geometry-based stochastic simulation platform for wireless channels (Jaeckel et al., 2023). ... The DeepMIMO dataset proposed in (Alkhateeb, 2019) is a benchmark dataset for wireless channel modeling, building on the ray tracing tool Remcom (Remcom) and offering several predefined scenarios. |
| Dataset Splits | Yes | For producing the training dataset, we draw $\mathrm{SNR}_i$ uniformly between 0 dB and 20 dB for each training sample and compute the corresponding noise variance $\sigma_i^2$. Subsequently, we generate the training dataset $Y = \{y_i \mid y_i = h_i + n_i\}$ (49) with $n_i \sim \mathcal{N}_{\mathbb{C}}(0, \sigma_i^2 \mathbf{I})$. ... For that, we took the model resulting in the largest evidence lower bound (ELBO) over a validation set of 5000 samples. ... For QuaDRiGa, we used 10 000 validation channel realizations. ... We then compress and reconstruct ground-truth (i.e., QuaDRiGa) channels using these trained autoencoders and evaluate the normalized MSE $\mathrm{nMSE} = \frac{1}{N_\mathrm{test}} \sum_{i=1}^{N_\mathrm{test}} \|\hat{h}_i - h_i\|_2^2 / N$ and the cosine similarity $\rho_c = \frac{1}{N_\mathrm{test}} \sum_{i=1}^{N_\mathrm{test}} |\hat{h}_i^{\mathrm{H}} h_i| / (\|\hat{h}_i\|_2 \|h_i\|_2)$, with $h_i$ and $\hat{h}_i$ being the ground-truth and reconstructed channel. |
| Hardware Specification | Yes | All simulations have been carried out on an NVIDIA A40 GPU. |
| Software Dependencies | Yes | All models and experiments have been implemented in Python 3.10.13 using PyTorch 2.1.2 and pytorch-cuda 12.1. |
| Experiment Setup | Yes | The number of antennas N = M is set to 16, and the number of grid points S is set to 256. The number $N_t$ of training samples is 10 000. ... We applied hyperparameter tuning for each simulation setup to adjust the width and the depth d for the encoder as well as the linear layer width, the number of convolutional channels in the decoder, and the learning rate. For that, we took the model resulting in the largest evidence lower bound (ELBO) over a validation set of 5000 samples. For the optimization, we used the Adam optimizer (Kingma & Ba, 2015). ... The only hyperparameter for CSGMM is the number K of components. ... In all simulations and each training sample, we draw signal-to-noise ratios (SNRs) uniformly distributed between 5 and 20 dB, defining the noise variance $\sigma_i^2$ (cf. Section 4). |
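To make the evaluation metrics quoted in the Research Type row concrete, here is a minimal NumPy sketch of the power angular profile (15) and the channel-wise angular spread (16). The function names and the array layout (generated samples stacked row-wise as `(N_test, S_R)`) are our assumptions, not the authors' implementation.

```python
import numpy as np

def power_angular_profile(s_gen):
    """Eq. (15): average over samples of the per-sample normalized power
    at each angular grid point q.

    s_gen: complex array of shape (N_test, S_R) of generated samples s_{R,i}.
    Returns an array of length S_R with P_omega^{(R)}(q) for each q.
    """
    power = np.abs(s_gen) ** 2                        # |s^{(g)}_{R,i}|^2
    per_sample = power / power.sum(axis=1, keepdims=True)
    return per_sample.mean(axis=0)

def angular_spread(s):
    """Eq. (16): power-weighted angular variance of a single sample s_R.

    s: complex array of length S_R; grid angles omega_g = g*pi/S_R
    for g = -S_R/2, ..., S_R/2 - 1.
    """
    S_R = s.shape[0]
    g = np.arange(-S_R // 2, S_R // 2)
    omega = g * np.pi / S_R
    w = np.abs(s) ** 2
    mu = (omega * w).sum() / w.sum()                  # weighted mean angle mu_R
    return ((omega - mu) ** 2 * w).sum() / w.sum()
```

Sanity checks follow directly from the formulas: a uniform-power sample yields a flat profile summing to one, and a sample with all power on a single grid point has zero angular spread.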
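The channel-generation metrics (nMSE and cosine similarity) quoted above can likewise be sketched in a few lines. This follows the formulas as printed (nMSE normalized by the channel dimension N); the batched `(N_test, N)` layout is an assumption for illustration.

```python
import numpy as np

def nmse(h_true, h_hat):
    """nMSE = (1/N_test) * sum_i ||h_hat_i - h_i||_2^2 / N, with N = dim(h)."""
    N = h_true.shape[1]
    return np.mean(np.linalg.norm(h_hat - h_true, axis=1) ** 2) / N

def cosine_similarity(h_true, h_hat):
    """rho_c = (1/N_test) * sum_i |h_hat_i^H h_i| / (||h_hat_i|| * ||h_i||)."""
    inner = np.abs(np.sum(np.conj(h_hat) * h_true, axis=1))
    norms = np.linalg.norm(h_hat, axis=1) * np.linalg.norm(h_true, axis=1)
    return np.mean(inner / norms)
```

A perfect reconstruction gives nMSE = 0 and rho_c = 1; rho_c is invariant to a per-channel complex scaling of the reconstruction, which is why it complements the nMSE.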
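The Dataset Splits row describes how the noisy training set Y of eq. (49) is produced: draw a per-sample SNR uniformly in dB, convert it to a noise variance, and add circularly symmetric complex Gaussian noise. A minimal sketch is below; the exact SNR-to-variance mapping (here $\mathrm{SNR}_i = \|h_i\|^2 / (N \sigma_i^2)$) and the function name are our assumptions.

```python
import numpy as np

def make_noisy_training_set(H, snr_low_db, snr_high_db, seed=0):
    """Build Y = {y_i = h_i + n_i} (eq. (49)): per training sample, draw
    SNR_i ~ U[snr_low_db, snr_high_db] in dB and add complex Gaussian noise
    n_i ~ N_C(0, sigma_i^2 * I) with the matching variance.

    H: complex array of shape (N_t, N) of ground-truth channels h_i.
    Assumed SNR definition: SNR_i = ||h_i||^2 / (N * sigma_i^2).
    """
    rng = np.random.default_rng(seed)
    N_t, N = H.shape
    snr_db = rng.uniform(snr_low_db, snr_high_db, size=N_t)
    sigma2 = (np.linalg.norm(H, axis=1) ** 2 / N) / 10 ** (snr_db / 10)
    # circularly symmetric: real and imaginary parts each carry sigma_i^2 / 2
    noise = np.sqrt(sigma2 / 2)[:, None] * (
        rng.standard_normal((N_t, N)) + 1j * rng.standard_normal((N_t, N))
    )
    return H + noise
```

With a degenerate SNR range (low = high), the empirical per-entry noise power concentrates at the corresponding $\sigma_i^2$, which makes the mapping easy to verify.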