ConSense: Continually Sensing Human Activity with WiFi via Growing and Picking

Authors: Rong Li, Tao Deng, Siwei Feng, Mingjie Sun, Juncheng Jia

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Evaluation results on three public WiFi datasets demonstrate that ConSense not only outperforms several competitive approaches but also requires fewer parameters, highlighting its practical utility in class-incremental scenarios for HAR." "We validate our proposed framework on three publicly available WiFi datasets, confirming that ConSense exceeds the performance of other models while utilizing fewer parameters." "Evaluation Metrics: We use two metrics, i.e., the average accuracy and the average forgetting, to measure the performance of ConSense on all the classes seen so far." "Comparative Results / Performance Comparison: Tables 2 and 3 present the results of the average accuracy A and the average forgetting F, respectively." "Ablation Test"
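The row above quotes the two metrics without their formulas. As a point of reference, the sketch below implements the standard class-incremental definitions of average accuracy and average forgetting (an assumption: the paper is not quoted giving these formulas, but these are the usual definitions behind the A and F reported in its Tables 2 and 3).

```python
# Hedged sketch: standard class-incremental metrics, not code from the paper.
# acc[k][j] is the test accuracy on task j measured after training on task k.

def average_accuracy(acc, k):
    """Mean accuracy over all tasks seen so far, after learning task k."""
    return sum(acc[k][j] for j in range(k + 1)) / (k + 1)

def average_forgetting(acc, k):
    """Mean drop from each earlier task's best past accuracy to its
    accuracy after learning task k (task k itself is excluded)."""
    if k == 0:
        return 0.0
    drops = [max(acc[i][j] for i in range(j, k)) - acc[k][j]
             for j in range(k)]
    return sum(drops) / k

# Toy run: two tasks; accuracy on task 0 drops from 0.9 to 0.8.
acc = {0: {0: 0.9}, 1: {0: 0.8, 1: 0.85}}
print(average_accuracy(acc, 1))   # (0.8 + 0.85) / 2 = 0.825
print(average_forgetting(acc, 1)) # 0.9 - 0.8, i.e. about 0.1
```

Lower forgetting means earlier activity classes are better retained as new ones are added.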
Researcher Affiliation | Academia | Rong Li, Tao Deng*, Siwei Feng*, Mingjie Sun, Juncheng Jia, School of Computer Science and Technology, Soochow University, China. EMAIL, EMAIL
Pseudocode | No | The paper describes the methods and procedures using prose and mathematical equations but does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Code: https://github.com/kikihub/consense
Open Datasets | Yes | "Evaluation results on three public WiFi datasets demonstrate that ConSense..." "We selected the WiAR (Guo et al. 2019), MMFi (Yang et al. 2024), and XRF (Wang et al. 2024) datasets, which offer a broader range of categories."
Dataset Splits | Yes | "WiAR consists of 480 CSI samples, evenly distributed across 16 distinct classes. We divided the dataset into training and testing subsets at a 4:1 ratio. Each class contains 20 samples, with 14 samples per class allocated for training and the remaining 6 used for testing."
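The 4:1 split described above can be sketched as a per-class (stratified) partition: shuffle each class's 20 samples and take 14 for training, 6 for testing. The function name, seed, and shuffling policy below are illustrative assumptions; the paper's exact splitting code is not quoted.

```python
import random

def per_class_split(samples_by_class, n_train=14, seed=0):
    """Stratified split: shuffle each class's samples, take the first
    n_train for training and the rest for testing (14/6 for WiAR).
    Hypothetical helper, not the authors' code."""
    rng = random.Random(seed)
    train, test = [], []
    for label, samples in samples_by_class.items():
        pool = list(samples)
        rng.shuffle(pool)
        train += [(s, label) for s in pool[:n_train]]
        test += [(s, label) for s in pool[n_train:]]
    return train, test

# Toy WiAR-like layout: 16 classes x 20 samples = 480 samples total.
data = {c: [f"c{c}_s{i}" for i in range(20)] for c in range(16)}
train, test = per_class_split(data)
print(len(train), len(test))  # 224 96, i.e. the stated 4:1 ratio
```

Splitting per class (rather than over the pooled 480 samples) guarantees every activity class appears in both subsets with the stated 14/6 counts.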
Hardware Specification | Yes | "Our method is implemented by PyTorch (Paszke et al. 2019) and trained on NVIDIA A5000 GPU with 32GB memory."
Software Dependencies | No | "Our method is implemented by PyTorch (Paszke et al. 2019)... The optimizer chosen is Adam (Kingma and Ba 2014)..." While PyTorch is mentioned, a specific version number for it or any other key software dependency is not provided.
Experiment Setup | Yes | "We set the number of Gaussian distributions in the positional encoding to 10... The standard deviation of the Gaussian distributions on all the datasets is uniformly set to 8. The number of stacks in the module is set to 1. The input dimensions for the three datasets are set to 90, 342, and 270, respectively, while maintaining a consistent number of heads at 9 for each, and employing a dropout rate of 0.1. The optimizer chosen is Adam (Kingma and Ba 2014), with an initial learning rate of 0.001 and a batch size of 16. The model's training cycle is set to 50 epochs."
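The reported hyperparameters are internally consistent: each input dimension (90, 342, 270) divides evenly by the 9 attention heads, as multi-head attention requires. The sketch below collects them into a config and checks that constraint; the dict keys and dataset-to-dimension mapping are my assumptions for illustration, not names from the paper.

```python
# Hedged sketch: reported hyperparameters as a config (key names are
# illustrative), plus a sanity check that each input dimension splits
# evenly across the 9 attention heads.
CONFIG = {
    "num_gaussians": 10,   # Gaussian distributions in positional encoding
    "gaussian_std": 8,     # standard deviation, uniform across datasets
    "num_stacks": 1,
    "num_heads": 9,
    "dropout": 0.1,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "batch_size": 16,
    "epochs": 50,
}
# Assumed mapping of the three datasets to the three reported input dims.
INPUT_DIMS = {"WiAR": 90, "MMFi": 342, "XRF": 270}

for name, dim in INPUT_DIMS.items():
    assert dim % CONFIG["num_heads"] == 0, f"{name}: {dim} not divisible by 9"
    print(name, "per-head dim:", dim // CONFIG["num_heads"])
```

With 9 heads the per-head dimensions come out to 10, 38, and 30, which is why a single head count can be kept consistent across all three datasets.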