GPU-Accelerated Parallel Bilevel Optimization for Robust 6G ISAC

Authors: Xingdi Chen, Kai Yang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments have been conducted to evaluate the performance of the proposed methods. In particular, the proposed GPU-accelerated parallel bilevel optimization can accelerate convergence by up to 50 times compared to conventional gradient-based methods.
Researcher Affiliation | Academia | Xingdi Chen, Kai Yang*, School of Computer Science and Technology, Tongji University, China.
Pseudocode | Yes | Algorithm 1: BOBLRBF (Bi-Objective Bi-Level optimization based Robust Beam Forming).
Open Source Code | No | The paper does not explicitly state that source code for the methodology is provided, nor does it include a link to a code repository.
Open Datasets | No | In the simulation, we consider the networked ISAC scenario with L = 3 BSs. BSs are deployed with uniform linear arrays (ULAs) with half-wavelength spacing between consecutive antennas, and each BS serves one communication user. The number of antennas at each BS is N = 5. BSs are located at (−40 m, 40√3 m), (80 m, 0 m) and (−40 m, −40√3 m), respectively, while communication users are randomly distributed around BSs. There are J = 2 targets located at (2 m, 5 m) and (−5 m, 1 m). The communication channels between BSs and communication users are set as Rayleigh fading following the standard assumption, i.e., each channel coefficient h_{l,c,k} is generated according to a complex standard normal distribution, with zero mean and unit variance.
Dataset Splits | No | The paper describes a simulated environment and does not mention any dataset splits (e.g., training, validation, test) for reproducibility. Data is generated according to specific parameters.
Hardware Specification | Yes | The algorithm BOBLRBF is executed on a machine equipped with a 12th Gen Intel(R) Core(TM) i7-12700H, and the algorithm BOBLRBF-DNN runs on an NVIDIA GeForce RTX 3060.
Software Dependencies | No | The paper mentions that W_y, y = 1, ..., Y−1 were set as multilayer perceptrons (MLPs), but does not provide specific software names or version numbers for libraries, frameworks, or programming languages used (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | In the simulation, we consider the networked ISAC scenario with L = 3 BSs. ... The number of antennas at each BS is N = 5. BSs are located at (−40 m, 40√3 m), (80 m, 0 m) and (−40 m, −40√3 m), respectively... There are J = 2 targets located at (2 m, 5 m) and (−5 m, 1 m)... The power budgets {P_l} of all BSs are set to be 40 dBm... In experiments, we set W_y, y = 1, ..., Y−1 as multilayer perceptrons (MLPs).
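The experiment setup quoted above can be sketched as a short NumPy script. This is only an illustrative reconstruction of the simulated scenario, not the authors' code: variable names, the random seed, and the array shapes are assumptions, and the minus signs and √3 factors in the coordinates are inferred from the (extraction-damaged) layout described in the paper.

```python
import numpy as np

# Illustrative sketch of the networked ISAC simulation scenario
# (all names and shapes are assumptions, not the authors' code).
rng = np.random.default_rng(0)  # assumed seed for reproducibility

L = 3  # number of base stations
N = 5  # antennas per BS (ULA, half-wavelength spacing)
K = 1  # communication users served per BS
J = 2  # sensing targets

# BS and target positions in meters (signs/sqrt(3) reconstructed).
bs_positions = np.array([(-40.0,  40.0 * np.sqrt(3)),
                         ( 80.0,   0.0),
                         (-40.0, -40.0 * np.sqrt(3))])
target_positions = np.array([(2.0, 5.0), (-5.0, 1.0)])

# Power budget: 40 dBm per BS, converted to watts.
P_dbm = 40.0
P_watts = 10 ** ((P_dbm - 30.0) / 10.0)  # 40 dBm = 10 W

# Rayleigh-fading channels: each coefficient h_{l,c,k} ~ CN(0, 1),
# i.e. complex Gaussian with zero mean and unit variance.
h = (rng.standard_normal((L, L, K, N))
     + 1j * rng.standard_normal((L, L, K, N))) / np.sqrt(2.0)
```

Dividing by √2 makes the real and imaginary parts each have variance 1/2, so E|h|² = 1, matching the "zero mean and unit variance" assumption in the paper's setup.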