Robust Automatic Modulation Classification with Fuzzy Regularization

Authors: Xinyan Liang, Ruijie Sang, Yuhua Qian, Qian Guo, Feijiang Li, Liang Du

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on benchmark datasets demonstrate that FR achieves superior classification accuracy and robustness compared to competing methods, making it a promising solution for real-world spectrum management and communication applications.
Researcher Affiliation | Academia | ¹Institute of Big Data Science and Industry, Key Laboratory of Evolutionary Science Intelligence of Shanxi Province, Shanxi University, Taiyuan, China; ²Shanxi Key Laboratory of Big Data Analysis and Parallel Computing, School of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, China. Correspondence to: Yuhua Qian <EMAIL>.
Pseudocode | No | The paper describes the proposed method mathematically using equations (1) to (12) and provides diagrams, but it does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | The code is available at https://github.com/ruijiesang/FR-AMC.
Open Datasets | Yes | FR is evaluated on the RADIOML 2016.10a, RADIOML 2016.10b, and RADIOML 2018.01A datasets. The results demonstrate that Fuzzy Regularization not only enhances the model's robustness but also improves its convergence speed to a certain extent. The data come from publicly available wireless modulation recognition datasets: the effectiveness of FR in suppressing prediction ambiguity was evaluated on six signal datasets (RadioML 2016.10a, RadioML 2016.10b, RADIOML 2018.01A, and their corresponding noise versions Noise2016a, Noise2016b, and Noise2018). The datasets can be downloaded from https://www.deepsig.io/datasets.
Dataset Splits | No | The paper describes the structure of the datasets (e.g., number of samples per SNR and modulation type) and mentions using a subset with an SNR of 0 dB or higher, but it does not explicitly provide the training/validation/test split ratios or the methodology needed to reproduce the splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types) used to run the experiments.
Software Dependencies | No | The paper does not provide ancillary software details with version numbers (e.g., the Python version, or library versions for PyTorch or TensorFlow).
Experiment Setup | No | Section 4.3 states: 'all controllable parameters including the learning rate, random seeds, and model initialization were kept consistent before and after applying FR.' However, the paper does not explicitly state the actual values of these hyperparameters or other system-level training settings in the main text.
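Two of the gaps flagged above (unspecified dataset splits and unreported random seeds) are exactly the details a reproduction would have to fill in by convention. A minimal sketch of what a reproducible, seeded, class-stratified split could look like; the `stratified_split` helper and the placeholder label array are hypothetical illustrations, not code from the FR-AMC repository, and real RadioML data is indexed by (modulation, SNR) rather than the flat labels used here:

```python
import numpy as np

def stratified_split(labels, test_frac=0.2, seed=0):
    """Seeded per-class split; returns (train_idx, test_idx).

    `labels` is a 1-D array of class labels. Each class is shuffled
    with a generator derived only from `seed`, so the split is fully
    deterministic and preserves class proportions.
    """
    rng = np.random.default_rng(seed)
    train, test = [], []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        cut = int(round(len(idx) * (1 - test_frac)))
        train.extend(idx[:cut])
        test.extend(idx[cut:])
    return np.array(sorted(train)), np.array(sorted(test))

# Hypothetical stand-in for RadioML-style labels:
# 11 modulation classes with 1000 examples each.
labels = np.repeat(np.arange(11), 1000)
train_idx, test_idx = stratified_split(labels, test_frac=0.2, seed=42)
```

Publishing the seed and the split function alongside the code would resolve the "Dataset Splits" and "Experiment Setup" findings above, since any reader could then regenerate identical train/test indices.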