Learn Singularly Perturbed Solutions via Homotopy Dynamics

Authors: Chuqi Chen, Yahong Yang, Yang Xiang, Wenrui Hao

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate the method on diverse problems, including the Allen-Cahn equation, high-dimensional Helmholtz equation, and operator learning for Burgers equation (Section 5)." "5. Experiments: We conduct several experiments across different problem settings to assess the efficiency of our proposed method."
Researcher Affiliation | Academia | "1 Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong SAR, China; 2 Department of Mathematics, The Pennsylvania State University, PA, USA; 3 Algorithms of Machine Learning and Autonomous Driving Research Lab, HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute, Futian, Shenzhen, China. Correspondence to: Yahong Yang <EMAIL>."
Pseudocode | Yes | "Algorithm 1: Homotopy Dynamics Path Tracking"
Open Source Code | No | The paper does not contain any explicit statement about providing source code, nor does it include a link to a code repository.
Open Datasets | No | The paper primarily deals with solving partial differential equations (PDEs) such as the Allen-Cahn, Helmholtz, and Burgers equations, which are defined mathematically. While it mentions generating initial conditions for the Burgers equation from a Gaussian random field, it does not refer to, or provide access information for, any pre-existing, publicly available dataset in the traditional sense.
Dataset Splits | No | The paper specifies the "number of residual points n_res = 200 and number of boundary points n_bc = 2" for the 1D Allen-Cahn equation, with similar counts for the other problems. For the Burgers equation, it states: "We utilize a spatial resolution of 128 grids to represent both the input and output functions." These specify sampling densities for the problem definition and solution representation, not traditional training/validation/test splits partitioned from a larger dataset.
Hardware Specification | Yes | "Each experiment is run on a single NVIDIA 3070Ti GPU using CUDA 11.8."
Software Dependencies | Yes | "We develop our experiments in PyTorch 1.12.1 (Paszke et al., 2019) with Python 3.9.12."
Experiment Setup | Yes | "We use multilayer perceptrons (MLPs) with tanh activations and three hidden layers with width 30. We initialize these networks with the Xavier normal initialization (Glorot & Bengio, 2010) and all biases equal to zero. Training: We use Adam to train the neural network, and we tune the learning rate by a grid search on {10^-5, 10^-4, 10^-3, 10^-2}. All iterations continue until the loss stabilizes and no longer decreases significantly."
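The paper's Algorithm 1 ("Homotopy Dynamics Path Tracking") is not reproduced in this report. As a hedged illustration of the general idea of homotopy path tracking, the scalar sketch below tracks the zero path of H(u, t) = (1 - t) G(u) + t F(u) from an easy start system G to a target system F, warm-starting a Newton corrector at each step. The toy functions F and G are invented for illustration and are not taken from the paper.

```python
import numpy as np

def newton(f, df, u, tol=1e-10, max_iter=50):
    """Newton's method for a scalar root, used as the corrector."""
    for _ in range(max_iter):
        step = f(u) / df(u)
        u -= step
        if abs(step) < tol:
            break
    return u

def homotopy_track(F, dF, G, dG, u0, n_steps=20):
    """Track the zero path of H(u, t) = (1-t)*G(u) + t*F(u) from t=0 to t=1.
    The predictor is the previous solution; the corrector is Newton."""
    u = u0
    for t in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        H = lambda v, t=t: (1 - t) * G(v) + t * F(v)
        dH = lambda v, t=t: (1 - t) * dG(v) + t * dF(v)
        u = newton(H, dH, u)
    return u

# Toy target: F(u) = u**3 - u + 0.3 has three real roots; continuation
# from the easy start system G(u) = u - 1 selects the root whose path
# connects continuously to u = 1.
F = lambda u: u**3 - u + 0.3
dF = lambda u: 3 * u**2 - 1
G = lambda u: u - 1.0
dG = lambda u: 1.0
root = homotopy_track(F, dF, G, dG, u0=1.0)  # root near 0.79
```

The same predictor-corrector pattern extends to the PDE setting by replacing the scalar root solve with a network training step at each value of the homotopy parameter.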
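The report notes that initial conditions for the Burgers equation were generated from a Gaussian random field. The paper's exact covariance is not quoted here, so the sketch below uses one common spectral construction (white noise filtered by a squared-exponential spectrum on a periodic grid); the 128-point resolution matches the quoted spatial resolution, while `length_scale` is an assumed parameter.

```python
import numpy as np

def gaussian_random_field(n=128, length_scale=0.1, rng=None):
    """Sample a periodic 1D Gaussian random field on n grid points by
    filtering complex white noise with a squared-exponential spectrum."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
    spectrum = np.exp(-0.5 * (length_scale * k) ** 2)
    noise = rng.normal(size=n) + 1j * rng.normal(size=n)
    return np.fft.ifft(noise * spectrum).real     # real-valued field

# One sample on the 128-point grid used for the Burgers experiments.
u0 = gaussian_random_field(128, rng=np.random.default_rng(0))
```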
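The experiment setup quoted above (tanh MLP, three hidden layers of width 30, Xavier normal weights, zero biases, Adam with a learning-rate grid search) can be sketched in PyTorch, the framework the paper reports using. The input/output dimensions are assumptions for a 1D problem and are not stated in the quoted text.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim=1, out_dim=1, width=30, hidden_layers=3):
    """MLP with tanh activations and three hidden layers of width 30,
    Xavier-normal weights and zero biases, per the quoted setup."""
    dims = [in_dim] + [width] * hidden_layers + [out_dim]
    layers = []
    for i in range(len(dims) - 1):
        lin = nn.Linear(dims[i], dims[i + 1])
        nn.init.xavier_normal_(lin.weight)
        nn.init.zeros_(lin.bias)
        layers.append(lin)
        if i < len(dims) - 2:          # no activation after the output layer
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

# Learning-rate grid quoted in the paper; each candidate would be trained
# with Adam until the loss stops decreasing significantly.
lr_grid = [1e-5, 1e-4, 1e-3, 1e-2]
model = make_mlp()
optimizer = torch.optim.Adam(model.parameters(), lr=lr_grid[2])
```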