A Lightweight Sparse Interaction Network for Time Series Forecasting
Authors: Xu Zhang, Qitong Wang, Peng Wang, Wei Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on public datasets show that LSINet achieves both higher accuracy and better efficiency than advanced linear models and transformer models in TSF tasks. |
| Researcher Affiliation | Academia | ¹Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China; ²Université Paris Cité, Paris, France. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the architecture and mechanisms in detail through text and diagrams (Figure 3), but it does not include any explicitly labeled pseudocode or algorithm blocks with structured steps. |
| Open Source Code | No | The paper does not provide any concrete statement about releasing source code for the methodology described, nor does it include any links to a code repository. |
| Open Datasets | Yes | We evaluate the performance of the proposed LSINet on 6 popular datasets, including Weather, Electricity, and 4 ETT datasets, which cover a range of time steps (17420 to 69680) and variables (7 to 321) and have been widely employed in the literature for multivariate forecasting tasks (Nie et al. 2023; Wu et al. 2021; Zhou et al. 2022b). |
| Dataset Splits | Yes | All methods follow the same data loading parameters (e.g., train/val/test split ratio) as in (Nie et al. 2023). |
| Hardware Specification | Yes | Experiments are conducted on an NVIDIA GeForce RTX 3090 GPU using PyTorch. |
| Software Dependencies | No | The paper mentions PyTorch as the framework used, but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For LSINet, the hidden size for patch embedding, position embedding (Eq. 2), and all used MLPs is fixed at 128. The number of heads h is fixed at 4. η ∈ {1, 3} is used for controlling the interval of using the sparse regularization loss. The number of patches N is fixed at 64 for sparse interaction learning. δ for controlling sparsity is fixed at 0.15, i.e., the sparse rate of C is 0.85. The number of stacked STI modules is fixed at 1 on all datasets. ... The learning rate is fixed at 1e-4. The batch size for the 4 ETT datasets is fixed at 128, while those for the Weather and Electricity datasets are fixed at 64 and 32 respectively. All methods follow the same data loading parameters (e.g., train/val/test split ratio) as in (Nie et al. 2023). For each experiment, we independently ran 5 times with 5 different seeds for 30 epochs and reported the average metrics and standard deviations. |
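The quoted setup can be collected into a configuration sketch for anyone attempting a reproduction. This is a minimal illustration only: the field names (`hidden_size`, `num_patches`, etc.) are hypothetical, as the paper releases no code; only the values are taken from the quoted experiment setup.

```python
# Hypothetical configuration sketch of the LSINet experiment setup.
# Field names are illustrative; the values come from the quoted setup text.

LSINET_CONFIG = {
    "hidden_size": 128,      # patch embedding, position embedding, and MLP width
    "num_heads": 4,          # multi-head h
    "eta_choices": [1, 3],   # interval options for the sparse regularization loss
    "num_patches": 64,       # N, patches used for sparse interaction learning
    "delta": 0.15,           # sparsity control; sparse rate of C is 1 - delta
    "num_sti_modules": 1,    # stacked STI modules on all datasets
    "learning_rate": 1e-4,
    "epochs": 30,
    "num_seeds": 5,          # runs averaged, with standard deviations reported
}

# Per-dataset batch sizes as stated in the setup.
BATCH_SIZES = {
    "ETTh1": 128, "ETTh2": 128, "ETTm1": 128, "ETTm2": 128,
    "Weather": 64, "Electricity": 32,
}

def batch_size_for(dataset: str) -> int:
    """Return the batch size quoted for a given dataset."""
    return BATCH_SIZES[dataset]
```

A reproduction would still need the unspecified details (PyTorch version, optimizer, and the data-loading parameters inherited from Nie et al. 2023).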