NEAR: Neural Electromagnetic Array Response
Authors: Yinyan Bu, Jiajie Yu, Kai Zheng, Xinyu Zhang, Piya Pal
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive simulations and real-world experiments using radar platforms demonstrate NEAR's effectiveness and its ability to adapt to unseen environments. Section 5.1 is titled "Simulation Tasks" and Section 5.2 is titled "Real-world Experiments". The paper reports evaluation metrics such as NRMSE, Resolution Probability, and DOA estimation error in tables and figures. An "Ablation Study" is also conducted. |
| Researcher Affiliation | Academia | All authors are affiliated with "Department of Electrical and Computer Engineering, University of California San Diego (UCSD), La Jolla, United States." The provided email addresses (EMAIL, EMAIL) also indicate an academic institution. |
| Pseudocode | No | The paper describes the methodology in detail, but it does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | The code is available at https://github.com/J1mmyYu1/NEAR. |
| Open Datasets | No | The paper conducts "Simulation Tasks", where data is generated as described in Section C.1.1, and "Real-world Experiments" using a "commercial MIMO radar platform (IMAGEVK-74)", where data is collected. No public dataset with access information is used for the main experiments. While other datasets (e.g., nuScenes, ColoRadar) are mentioned in the related work, they are not used in the paper's experiments. |
| Dataset Splits | No | For simulation tasks, the paper provides "selected indices for sub-sampling" for 6x6, 8x8, and 10x10 configurations (e.g., "6x6: Sx = Sy = {0, 1, 2, 3, 11, 19}") and mentions running experiments with "N = 50 Monte-Carlo trials". For real-world experiments, it states, "we select a subset of data and treat it as a sparse set of measurements." These describe data sampling and trial repetitions, not the explicit training/validation/test splits typically required for reproducibility in machine learning. |
| Hardware Specification | Yes | All experiments are run on a laptop with an AMD Ryzen 9 5900HS CPU with Radeon Graphics and an NVIDIA GeForce RTX 3050 Ti Laptop GPU. |
| Software Dependencies | No | The paper mentions using the "Adam optimizer" and the "CVX (CVX Research, 2012; Grant & Boyd, 2008) toolbox". CVX version 2.0 is mentioned in its citation. However, it does not specify versions for other key software components like programming languages (e.g., Python, MATLAB), deep learning frameworks (e.g., PyTorch, TensorFlow), or CUDA versions, which are necessary for reproducible setup. |
| Experiment Setup | Yes | We optimize the loss function defined in (11) through a two-stage training process. In the initial warm-up stage, we set λ = 0 and optimize using the Adam optimizer with β = (0.9, 0.999) and a weight decay of 10^-4. Letting Θ_0 = arg min_Θ L_d, we use the obtained parameters as the initialization for the next stage. In the adaptation/training stage, we optimize Θ, m1, and m2 using Adam with the same configuration as in the warm-up stage. In both the simulation and real-world experiments, we normalized the input coordinates to the range (-1, 1]. For simulation tasks, we use a learning rate of 10^-4 and train for 5,000 epochs in the warm-up stage. In the adaptation stage, we set λ = 0.5, lr_Θ = 10^-3, lr_(m1,m2) = 3×10^-3, and train for 25,000 epochs, with Kmax set to the exact number of targets for each scenario. For real-world experiments, we adopt a learning rate of 10^-4 and train for 10,000 epochs in the warm-up stage. In the adaptation stage, we set λ = 1, lr_Θ = 10^-3, lr_(m1,m2) = 3×10^-3, and train for 50,000 epochs. Here, we set Kmax = 4 as an upper bound on the number of targets in each range bin, as typically, the number of targets within a single range bin is very small (Sun et al., 2020). |
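The two-stage schedule quoted above (warm-up with λ = 0, then an adaptation stage with the regularization weight switched on and a lower model learning rate) can be sketched as follows. This is a minimal illustrative sketch on a toy least-squares problem with a hand-rolled NumPy Adam, not the authors' code: the variable names (`theta`, `loss_grad`, the problem `A`, `y`) and the scaled-up learning rates are assumptions chosen so the toy problem converges in a few thousand steps.

```python
# Illustrative two-stage Adam schedule: warm-up with lam = 0, then
# adaptation with the sparsity weight lam switched on. Toy problem and
# all names are hypothetical, not from the NEAR repository.
import numpy as np

def adam_step(param, grad, state, lr, betas=(0.9, 0.999),
              eps=1e-8, weight_decay=1e-4):
    """One Adam update; weight decay of 1e-4 folded into the gradient,
    matching the optimizer settings reported in the paper."""
    m, v, t = state
    grad = grad + weight_decay * param
    t += 1
    m = betas[0] * m + (1 - betas[0]) * grad
    v = betas[1] * v + (1 - betas[1]) * grad ** 2
    m_hat = m / (1 - betas[0] ** t)
    v_hat = v / (1 - betas[1] ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, (m, v, t)

def loss_grad(theta, A, y, lam):
    # Gradient of a data term ||A theta - y||^2 plus a stand-in
    # l1-style sparsity penalty weighted by lam.
    resid = A @ theta - y
    return 2 * A.T @ resid + lam * np.sign(theta)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
theta_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])  # sparse ground truth
y = A @ theta_true

theta = np.zeros(5)
state = (np.zeros(5), np.zeros(5), 0)

# Stage 1 (warm-up): lam = 0, data term only.
for _ in range(5000):
    theta, state = adam_step(theta, loss_grad(theta, A, y, lam=0.0),
                             state, lr=1e-2)

# Stage 2 (adaptation): sparsity weight on (lam = 0.5 in the paper's
# simulation setting), lower learning rate for the model parameters.
for _ in range(5000):
    theta, state = adam_step(theta, loss_grad(theta, A, y, lam=0.5),
                             state, lr=1e-3)
```

The warm-up solution plays the role of Θ_0 = arg min_Θ L_d: it initializes the adaptation stage, which then trades a small amount of data fit for sparsity. In the paper this second stage also optimizes the additional parameters m1 and m2 at their own learning rate, which the single-parameter toy omits.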