Neural Wave Equation for Irregularly Sampled Sequence Data
Authors: Arkaprava Majumdar, M Anand Krishna, P. K. Srijith
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on several sequence labeling problems involving irregularly sampled sequence data and demonstrate the superior performance of the proposed neural wave equation model. ... Our experiments, conducted on diverse datasets such as person activity recognition, Walker2d kinematic simulation Lechner & Hasani (2020), sepsis (PhysioNet 2019) Reyna et al. (2019) and stance classification Derczynski et al. (2017) demonstrate the superior performance of neural wave equation models over existing baselines for sequence labeling problems. |
| Researcher Affiliation | Academia | Arkaprava Majumdar1, M Anand Krishna1, and P.K. Srijith1 1Indian Institute of Technology, Hyderabad |
| Pseudocode | Yes | A.10 NEURAL WAVE EQUATION ALGORITHM Algorithm 1 Neural Wave Equation |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code for the methodology described, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | The performance of the proposed Neural wave models was assessed through experiments on datasets containing irregular sequence data such as person activity recognition Markelle Kelly (2000), walker2d-v2 kinematic simulation Lechner & Hasani (2020), PhysioNet sepsis prediction, and stance classification of social media posts Derczynski et al. (2017). |
| Dataset Splits | Yes | Person activity recognition: Data is segmented into overlapping 32-step intervals with a 16-step overlap, yielding 7,769 training and 1,942 testing sequences. ... Walker2D dataset: The data is partitioned into 9,684 training sequences, 1,937 for testing, and 1,272 for validation. ... Sepsis prediction: We divided our data into a train, validation and test split of 70%, 15%, and 15% respectively. |
| Hardware Specification | Yes | Model training is conducted on an NVIDIA Tesla V100 32GB GPU and an L4 GPU. |
| Software Dependencies | No | We use the Tsit5 from the package torchdyn Poli et al. as our adaptive solver, which is an efficient reimplementation of the Dopri45 by the Julia Computing group Rackauckas & Nie (2017). ... The information about the step size and the ODESolvers that have been used for all our models and baselines is mentioned in Table 4 in Appendix A.13. |
| Experiment Setup | Yes | The experimented configuration includes setting the hidden state dimension to 64 for all source functions, keeping a minibatch size of 256, use of the Adam optimizer, a learning rate of 5 × 10⁻³, and 200 training epochs. ... The first MLP layer is a single layer with hidden dimension 64. The last MLP layer is also a single layer with hidden dimension equal to the output size. ... The Neural Wave model employs the Dopri5/Tsit5 method, setting the absolute and relative tolerance levels to 1e-3. A scheduled learning rate decay strategy is implemented, with a decay coefficient γ = 0.1, activated at the 100th epoch. |
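The reported training configuration (hidden dimension 64, minibatch size 256, Adam at 5 × 10⁻³, 200 epochs, step decay γ = 0.1 at epoch 100) can be sketched in PyTorch as follows. This is a minimal illustration of the optimizer and scheduler settings only; the layer shapes, input/output dimensions, and variable names are illustrative assumptions, not the authors' released code, and the wave-equation solver itself is omitted.

```python
import torch
import torch.nn as nn

hidden_dim = 64       # hidden state dimension from the reported setup
output_size = 7       # placeholder: task-dependent number of classes
input_dim = 32        # placeholder: task-dependent input feature size

# "First MLP layer" (single layer, hidden dim 64) and "last MLP layer"
# (single layer, hidden dim equal to the output size), per the table.
encoder = nn.Linear(input_dim, hidden_dim)
head = nn.Linear(hidden_dim, output_size)

params = list(encoder.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=5e-3)

# Scheduled decay: multiply the lr by gamma = 0.1 once, at the 100th epoch.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100], gamma=0.1
)

for epoch in range(200):
    # ... iterate minibatches of size 256, forward/backward passes ...
    optimizer.step()      # placeholder step (no gradients in this sketch)
    scheduler.step()      # advance the lr schedule once per epoch

final_lr = optimizer.param_groups[0]["lr"]  # 5e-4 after the single decay
```

`MultiStepLR` with a single milestone reproduces the "activated at the 100th epoch" behavior: the learning rate stays at 5e-3 for epochs 0-99 and drops to 5e-4 thereafter.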