Predicting Spectral Information for Self-Supervised Signal Classification
Authors: Yi Xu, Shuang Wang, Hantong Xing, Chenxu Wang, Dou Quan, Rui Yang, Dong Zhao, Luyang Mei
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results validate the superiority of the SGSSC method. For instance, when the proportion of labeled samples is only 0.5%, our method achieves an average improvement of 2.3% in downstream classification tasks compared to the best-performing self-supervised training strategies. Section 4 is dedicated to "Experiment" and includes subsections such as "4.1 Dataset", "4.3 Overall Performance", and "4.6 Ablation Experiment" with tables and figures presenting quantitative results. |
| Researcher Affiliation | Academia | The authors' affiliations are listed as "Yi Xu, Shuang Wang, Hantong Xing, Chenxu Wang, Dou Quan, Rui Yang, Dong Zhao, Luyang Mei, Xidian University". All authors are affiliated with Xidian University, which is an academic institution. |
| Pseudocode | No | The paper describes the methodology using prose and mathematical equations (e.g., Equations 3 and 4) and a pipeline diagram (Figure 2), but it does not contain any clearly labeled pseudocode blocks or algorithms in a structured, code-like format. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide a link to a code repository or mention code in supplementary materials for the methodology described in the paper. |
| Open Datasets | Yes | We adopted the data generation methodology used in the publicly available RML2016.10a dataset [O'Shea and West, 2016] to create datasets... We further evaluated the effectiveness of our method using six time series datasets: Human Activity Recognition (HAR) [Anguita et al., 2013], Epilepsy Seizure Prediction (ESP) [Andrzejak et al., 2001], and several datasets from the UCR Repository [Dau et al., 2019], including Wafer, Phalanges Outlines Correct (POC), Proximal Phalanx Outline Correct (PPOC), and Star Light Curves (SLC). |
| Dataset Splits | Yes | For all datasets, we performed a split, with 60% used for self-supervised training, 20% for validation, and 20% for testing. During fine-tuning on downstream tasks, we randomly sampled the corresponding proportion of data from the self-supervised training set and utilized their labels. |
| Hardware Specification | Yes | All experiments were conducted on a GeForce RTX 3090, and the reported results represent the average performance over five independent runs. |
| Software Dependencies | No | The paper mentions the use of an "Adam optimizer" and states that "Our model architecture is similar to ViT", but it does not specify version numbers for any software libraries (e.g., Python, PyTorch, TensorFlow) or specialized tools used in the experiments. |
| Experiment Setup | Yes | In the self-supervised training experiment, we used a mask ratio of 0.7, the Adam optimizer with a learning rate of 0.0001, a batch size of 64, and trained for a total of 500 epochs. For the fine-tuning experiments on downstream tasks, we employed the Adam optimizer, setting the learning rate for the classifier to 0.06 and a batch size of 64. Our model architecture is similar to ViT, with a patch size of 1×16, 8 layers, a hidden size of 128, an MLP size of 1024, and 8 attention heads. |
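The split protocol quoted in the "Dataset Splits" row (60% self-supervised training, 20% validation, 20% testing, with a labeled subset of the training portion sampled for fine-tuning) can be sketched as follows. This is a minimal illustration in standard-library Python; the helper names and seeding scheme are assumptions, since the paper releases no code — only the proportions come from the report.

```python
import random

def split_indices(n, seed=0):
    """Shuffle n sample indices and split them 60/20/20 into
    self-supervised training, validation, and test sets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def sample_labeled(train_idx, proportion, seed=0):
    """Randomly sample a labeled subset of the self-supervised training
    set for fine-tuning, e.g. proportion=0.005 for the 0.5% setting."""
    k = max(1, int(proportion * len(train_idx)))
    return random.Random(seed).sample(train_idx, k)

train, val, test = split_indices(1000)
labeled = sample_labeled(train, 0.005)
print(len(train), len(val), len(test), len(labeled))  # 600 200 200 3
```

Note that under this protocol the fine-tuning labels are drawn only from the self-supervised training portion, so the validation and test sets remain untouched across both stages.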
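For quick reference, the hyperparameters quoted in the "Experiment Setup" row can be collected into a single configuration. All values below come from the paper; the dictionary layout itself is a hypothetical restatement for illustration, not released code.

```python
# Self-supervised pre-training stage (values quoted from the paper).
SELF_SUPERVISED = {
    "mask_ratio": 0.7,
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "batch_size": 64,
    "epochs": 500,
}

# Fine-tuning stage for downstream classification.
FINE_TUNE = {
    "optimizer": "Adam",
    "classifier_learning_rate": 0.06,
    "batch_size": 64,
}

# ViT-like encoder architecture.
MODEL = {
    "patch_size": (1, 16),
    "layers": 8,
    "hidden_size": 128,
    "mlp_size": 1024,
    "attention_heads": 8,
}

print(SELF_SUPERVISED, FINE_TUNE, MODEL)
```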