Discovering Physics Laws of Dynamical Systems via Invariant Function Learning

Authors: Shurui Gui, Xiner Li, Shuiwang Ji

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. We conduct experiments to address the following research questions (RQs) and supplementary analyses (SAs). RQ1: Are existing meta-learning and invariant learning techniques effective for extracting invariant functions? RQ2: Can the proposed invariant function learning principle outperform baseline techniques? SA1: How do the full functions f and the invariant functions f_c differ in performance? SA2: Are the extracted invariant functions explainable and aligned with the true invariant mechanisms? SA3: How will performance change given different lengths of inputs and types of environments? (See Appx. F.2) SA4: Is the proposed hypernetwork implementation more efficient than previous implementations? (See Appx. G)
Researcher Affiliation: Academia. Department of Computer Science & Engineering, Texas A&M University, College Station, Texas, USA. Correspondence to: Shuiwang Ji <EMAIL>.
Pseudocode: No. The paper describes methods and objectives in prose and mathematical formulations, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. Our code has been released as part of the AIRS library (https://github.com/divelab/AIRS/).
Open Datasets: No. In our experiments, we introduce three multi-environment (ME) datasets: ME-Pendulum, ME-Lotka-Volterra, and ME-SIREpidemic. These three datasets are generated by simulators modified from the Damped Pendulum (Yin et al., 2021b), Lotka-Volterra (Ahmad, 1993), and SIREpidemic (Wang et al., 2021).
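The datasets come from ODE simulators where each environment corresponds to a different setting of the system coefficients. As a rough illustration of how such trajectories can be generated, the sketch below integrates the classic Lotka-Volterra predator-prey system with a fixed-step RK4 integrator; the coefficient values, step size, and initial state are illustrative assumptions, not the authors' simulator settings.

```python
import numpy as np

def lotka_volterra(state, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    """Classic predator-prey vector field: returns (dx/dt, dy/dt)."""
    x, y = state
    return np.array([alpha * x - beta * x * y,
                     delta * x * y - gamma * y])

def simulate(x0, t_span=10.0, dt=0.01, **params):
    """Integrate the system with fixed-step RK4; returns the trajectory."""
    n = int(t_span / dt)
    traj = np.empty((n + 1, 2))
    traj[0] = x0
    for i in range(n):
        s = traj[i]
        k1 = lotka_volterra(s, **params)
        k2 = lotka_volterra(s + 0.5 * dt * k1, **params)
        k3 = lotka_volterra(s + 0.5 * dt * k2, **params)
        k4 = lotka_volterra(s + dt * k3, **params)
        traj[i + 1] = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

# One "environment" would correspond to one choice of (alpha, beta, delta, gamma);
# the values below are standard textbook parameters, not the paper's.
traj = simulate(np.array([5.0, 3.0]))
```

Varying the coefficients across simulations would yield the multi-environment structure the datasets rely on.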
Dataset Splits: Yes. Each of these datasets includes 1000 samples, where 800 and 200 samples are assigned to the training set and test set, respectively.
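The stated 800/200 split of 1000 samples can be sketched as a shuffled index partition; the random seed and the choice to shuffle at all are assumptions not stated in the excerpt.

```python
import numpy as np

# Illustrative 80/20 split of a 1000-sample dataset.
# Seed and shuffling policy are assumptions, not taken from the paper.
rng = np.random.default_rng(0)
indices = rng.permutation(1000)
train_idx, test_idx = indices[:800], indices[800:]
```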
Hardware Specification: Yes. The deployment environment is Ubuntu 20.04 with 48 Intel(R) Xeon(R) Silver 4214R CPUs @ 2.40GHz, 755GB RAM, and NVIDIA RTX 2080Ti graphics cards.
Software Dependencies: No. Our implementation is built on PyTorch (Paszke et al., 2019). The deployment environment is Ubuntu 20.04 with 48 Intel(R) Xeon(R) Silver 4214R CPUs @ 2.40GHz, 755GB RAM, and NVIDIA RTX 2080Ti graphics cards.
Experiment Setup: Yes. We conduct experiments on 800-sample training sets with a training batch size of 32, which leads to 25 iterations per epoch. For each run, we optimize the neural network for 2,000 epochs, which is equivalent to 50,000 iterations. Given fixed learning iterations, the learning rate is selected from U(1e-4, 1e-3). The most critical hyper-parameters are λ_c and λ_adv, which control the information overlap between f_c and f.
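The iteration arithmetic above checks out, as the short sketch below verifies; the uniform learning-rate draw interprets U(1e-4, 1e-3) literally (a plain uniform distribution, which is an assumption, since log-uniform sampling is also common for learning rates).

```python
import math
import random

# Reported setup: 800 training samples, batch size 32.
iters_per_epoch = math.ceil(800 / 32)    # 25 iterations per epoch
total_iters = 2000 * iters_per_epoch     # 50,000 iterations over 2,000 epochs

# Literal reading of U(1e-4, 1e-3): a uniform draw over the interval.
lr = random.uniform(1e-4, 1e-3)
```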