RFMamba: Frequency-Aware State Space Model for RF-Based Human-Centric Perception
Authors: Rui Zhang, Ruixu Geng, Yadong Li, Ruiyuan Song, Hanqin Gong, Dongheng Zhang, Yang Hu, Yan Chen
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate the superior performance of our proposed RFMamba across all three downstream tasks. To the best of our knowledge, RFMamba is the first attempt to introduce SSM into RF-based human-centric perception. |
| Researcher Affiliation | Academia | 1 University of Science and Technology of China, 2 University of Washington |
| Pseudocode | No | The paper describes the model architecture and processes in detail, including figures like Figure 2 and Figure 3, but does not include a distinct pseudocode or algorithm block. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a link to a code repository. |
| Open Datasets | No | To address this critical gap, we introduce the Through-Wall Human Centric Perception (THP) dataset, a comprehensive dataset for the through-wall human-centric perception tasks. However, the paper does not provide explicit access information (link, DOI, or statement of public availability) for the THP dataset. |
| Dataset Splits | Yes | The dataset was split into a 4:1 training-testing ratio, using a fixed random seed of 42. |
| Hardware Specification | Yes | All baselines and our RFMamba are trained using an Nvidia RTX4090 GPU |
| Software Dependencies | No | All baselines and our RFMamba are trained using an Nvidia RTX4090 GPU and implemented with PyTorch. No specific version numbers for PyTorch or other libraries are provided. |
| Experiment Setup | Yes | We used the Adam optimizer with an initial learning rate of 2e-3, which decays by a factor of 0.5 (gamma) every 10 epochs using the StepLR scheduler. The batch size was set to 50, and training epochs were set to 50 for all models except RadarFormer (1000 epochs due to slower convergence). |
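The reported split and schedule can be sketched in plain Python. This is a minimal reconstruction, not the authors' code: the helper names and the 100-sample placeholder dataset are hypothetical, and the paper states only the 4:1 ratio, the seed of 42, and the Adam/StepLR hyperparameters.

```python
import random

def split_dataset(indices, train_ratio=0.8, seed=42):
    """Reproduce the reported 4:1 train/test split with a fixed seed.

    Hypothetical helper: the paper reports only the ratio and the seed,
    not how the shuffle itself was implemented.
    """
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def lr_at_epoch(epoch, base_lr=2e-3, gamma=0.5, step_size=10):
    """StepLR schedule: decay base_lr by gamma every step_size epochs,
    matching torch.optim.lr_scheduler.StepLR(step_size=10, gamma=0.5)."""
    return base_lr * gamma ** (epoch // step_size)

# Placeholder dataset of 100 sample indices (size is illustrative only).
train, test = split_dataset(list(range(100)))
print(len(train), len(test))  # 80 20
print(lr_at_epoch(0))         # 0.002
print(lr_at_epoch(25))        # 0.0005  (two decays: 2e-3 * 0.5**2)
```

In an actual PyTorch run, the `lr_at_epoch` closed form corresponds to constructing `torch.optim.Adam(model.parameters(), lr=2e-3)` wrapped in `StepLR(optimizer, step_size=10, gamma=0.5)` and calling `scheduler.step()` once per epoch.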