Neural Functions for Learning Periodic Signal

Authors: Woojin Cho, Minju Jo, Kookjin Lee, Noseong Park

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the efficacy of the proposed method through comprehensive experiments, including learning periodic solutions of differential equations, and time series imputation (interpolation) and forecasting (extrapolation) on real-world datasets. In this section, we evaluate the performance of NeRT on several benchmark problems. We have two sets of experiments: synthetic data and real-world datasets. To assess whether underlying signals are learned effectively, we formulate tasks as either interpolation or extrapolation; that is, given discrete measurements of the underlying signal, we train the proposed model and test it in either interpolation (imputation) or extrapolation (forecasting) settings.
Researcher Affiliation | Collaboration | 1TelePIX, 2LG CNS, 3Arizona State University, 4KAIST, EMAIL, EMAIL
Pseudocode | Yes | Here, we consider M-variate sequences measured at N collocation points (see Appendix F for a pseudocode-like algorithm). To provide detailed explanations of the training process of NeRT, we present the following training Algorithm 1.
Open Source Code | No | All implementations of our proposed method can be reproduced by referring to the attached README.md. To benefit the community, the code will be released online.
Open Datasets | Yes | In the scientific domain, datasets often exhibit periodic characteristics, and analyzing these patterns is crucial in the process of understanding the data. We train and evaluate NeRT and the other existing baselines using two scientific datasets: harmonic oscillation and 2D-Helmholtz equations (McClenny & Braga-Neto, 2020). For periodic time series experiments, we select four univariate time series datasets, i.e., Electricity, Traffic, Caiso, and NP, which are all well-known benchmark datasets used in Fan et al. (2022) and are known to have some periodic patterns. The benchmark datasets of the long-term series forecasting task (Wen et al., 2022) are used for our experiments.
Dataset Splits | Yes | The training (yellow) and the test (red) regions are separated by the vertical bar (x = 2.2). The daily maximum temperature data from April 2008 to April 2014 is used as the training dataset, while the periods from April 2004 to April 2008 and from April 2014 to April 2017 are used for testing. We randomly drop 30%, 50%, and 70% of the observations and evaluate the interpolation performance using MSE as a metric. We design the experiment using the first 10 samples from each dataset, each of which consists of 12 blocks (cf. Figure 17), and conduct an experiment to fill in the values of missing blocks. Within a sample, we perform both the interpolation and the extrapolation tasks. Detailed locations and constructions are summarized in Figure 17. As shown in Figure 17, in a sample there are three interpolation blocks colored in red, three extrapolation blocks colored in blue, one validation block colored in green, and the remaining yellow parts represent the training dataset. Each block has a length of 500. For each data sample, we fix the total length to 2,880, with a training size of 1,440 and validation and test sizes of 720 each.
Hardware Specification | Yes | The experiments are conducted on systems equipped with Intel Core i9 CPUs and NVIDIA RTX A6000 and A5000 GPUs.
Software Dependencies | Yes | We implement NeRT and the baselines with Python 3.9.7 and PyTorch 1.13.0 (Paszke et al., 2019) with CUDA 11.6 support.
Experiment Setup | Yes | All models are trained with Adam (Kingma & Ba, 2014) with a learning rate of 0.001. In addition, we use eight existing time series models, including Transformer-based and NODE-based models, as non-INR baselines (cf. Appendix J). See the full description of the experimental setup in Appendix G. We set md = 1, bd = 0 (undamped) or 4 (damped), ωd = 50, and Ad = 10. All experiments run for 2,000 epochs. For hyperparameters, we set Smax to 1, and for a fair comparison, we use similar model sizes for all methods and share the frequencies across the models. Two frequencies are used as hyperparameters, ωinit and ωinner. ωinit is used in our learnable Fourier feature mapping and corresponds to b in Equation 2. ωinner denotes the frequency of the sinusoidal function ρs in Equation 3. For the number of layers, we set Lt, Lf, and Ls to 2, and Lp to 5. The best hyperparameter configurations of NeRT for the periodic time series task are summarized in Table 9.
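The harmonic-oscillation dataset quoted above can be illustrated with the textbook closed-form solution of a damped oscillator. The sketch below is our own illustration, not the authors' data generator; the parameter defaults mirror the md, bd, ωd, and Ad values reported in the experiment-setup row, and the time range and number of collocation points are assumptions.

```python
import math

def damped_oscillation(t, A=10.0, b=0.0, m=1.0, omega=50.0):
    """Closed-form damped harmonic oscillation: A * exp(-b*t/(2m)) * cos(omega*t).

    b = 0 gives the undamped case; b > 0 adds an exponentially
    decaying envelope (the "damping" setting in the paper).
    """
    return A * math.exp(-b * t / (2.0 * m)) * math.cos(omega * t)

# Sample the damped signal at N collocation points on [0, 1].
N = 200
signal = [damped_oscillation(i / (N - 1), b=4.0) for i in range(N)]
```

Fitting an implicit neural representation to such samples, then querying it between or beyond the collocation points, corresponds to the interpolation and extrapolation tasks described above.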
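The random-drop interpolation setting in the dataset-splits row can be sketched as follows. This is a minimal, self-contained illustration of splitting a length-2,880 sample at a given drop rate; the block layout of Figure 17 is not reproduced here, and the function name and seed are our own.

```python
import random

def random_drop_split(n, drop_rate, seed=0):
    """Split indices 0..n-1 into observed (training) and dropped (evaluation)
    sets by randomly removing a `drop_rate` fraction of the observations."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    k = int(n * drop_rate)
    dropped = sorted(idx[:k])    # held out; interpolation is scored on these via MSE
    observed = sorted(idx[k:])   # visible to the model during training
    return observed, dropped

# Example: a length-2,880 sample with 50% of the points dropped.
observed, dropped = random_drop_split(2880, 0.5)
```

Repeating this with drop rates 0.3, 0.5, and 0.7 reproduces the three interpolation difficulty levels described in the text.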
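The experiment-setup row mentions a learnable Fourier feature mapping (Equation 2) with frequency hyperparameter ωinit. Equation 2 itself is not reproduced in this report, so as a reference point the sketch below shows the standard fixed-frequency form of a Fourier feature mapping, where each omega plays the role of ωinit; it is our own illustration, not the authors' layer.

```python
import math

def fourier_features(t, omegas):
    """Map a scalar input t to [sin(omega*t), cos(omega*t)] for each
    frequency omega -- the fixed-frequency analogue of a learnable
    Fourier feature layer (in the learnable version, the frequencies
    are trained rather than fixed)."""
    feats = []
    for w in omegas:
        feats.append(math.sin(w * t))
        feats.append(math.cos(w * t))
    return feats

# Two frequencies -> a 4-dimensional feature vector for the MLP input.
phi = fourier_features(0.25, [1.0, 50.0])
```

Such mappings let a coordinate MLP represent high-frequency periodic structure that raw scalar inputs cannot express.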