DONIS: Importance Sampling for Training Physics-Informed DeepONet
Authors: Shudong Huang, Rui Huang, Ming Hu, Wentao Feng, Jiancheng Lv
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we validate the proposed methods through a series of experiments and evaluate their performance using the specified metrics. The implementation of these methods is built upon the DeepXDE library [Lu et al., 2021b], with the complete code available in the supplementary materials for reference. Table 1 presents the hyperparameters used in our experiments. ... Figure 2 illustrates that DONIS accelerates loss convergence and achieves lower prediction error over the course of the training procedure. Figure 3 and the metrics in Table 2 demonstrate a consistent reduction in prediction error when applying DONIS-C, DONIS-F, or both in combination. |
| Researcher Affiliation | Academia | Shudong Huang^{1,2}, Rui Huang^{1,2}, Ming Hu^{1,2}, Wentao Feng^{1,2} and Jiancheng Lv^{1,2}; ^1 College of Computer Science, Sichuan University, Chengdu 610065, China; ^2 Engineering Research Center of Machine Learning and Industry Intelligence, Ministry of Education, Chengdu 610065, China |
| Pseudocode | Yes | Algorithm 1 Importance Sampling of Functions. 1: Input: dataset of functions N_f = {f^i}_{i=1}^{\|N_f\|} and seed points S_f = {a_n}_{n=1}^{\|S_f\|}. 2: Parameter: model weights θ. 3: Output: batch of functions M_f = {f^i}_{i=1}^{\|M_f\|}. 4: for i = 1 to \|N_f\| do 5: Calculate {l_r(f^i, a_n; θ) \| a_n ∈ S_f} 6: Compute q_{f^i} according to Eq. (11) 7: end for 8: M_f sampled according to q_{f^i} 9: return M_f ... Algorithm 2 Importance Sampling of Collocation Points. 1: Input: batch of functions M_f = {f^i}_{i=1}^{\|M_f\|}, sets of seed points S = {S_{c,i}}_{i=1}^{\|M_f\|} where S_{c,i} = {b_{i,l}}_{l=1}^{\|S_c\|}. |
| Open Source Code | Yes | 1Code is available at https://github.com/ruihuang-1/donis. |
| Open Datasets | No | We consider three partial differential equation scenarios: the Allen-Cahn equation, viscous Burgers equation, and a nonlinear diffusion-reaction equation. ... where the initial conditions are generated based on the Exponential Sine Squared kernel: ... where the initial conditions are generated based on the Radial Basis Function (RBF) kernel: ... We first train the model without any labeled data within the spatiotemporal domain Ω and then evaluate its performance using results obtained from traditional numerical solvers. |
| Dataset Splits | No | The operator-fitting approach inherently resembles parallel fitting of a class of functions, posing greater challenges in terms of labeled data requirements compared to PINNs. ... We first train the model without any labeled data within the spatiotemporal domain Ω and then evaluate its performance using results obtained from traditional numerical solvers. ... Table 1: Network and training parameters. \|N_f\| and \|N_c\| denote the sizes of the datasets of functions and collocation points; \|M_f\| and \|M_c\| are the batch sizes of functions and collocation points. |
| Hardware Specification | No | No specific hardware details (GPU models, CPU models, or cloud platforms) were found in the paper's main text or supplementary information. |
| Software Dependencies | No | The implementation of these methods is built upon the DeepXDE library [Lu et al., 2021b], with the complete code available in the supplementary materials for reference. |
| Experiment Setup | Yes | Table 1 presents the hyperparameters used in our experiments. ... Table 1: Network and training parameters. Branch Net: [128, 128, 128, 128, 128, 128]; Trunk Net: [2, 128, 128, 128, 128, 128]; Epochs: 30,000; Learning Rate: 5 × 10^{-4}; \|N_f\|: 1000; \|N_c\|: 10000; \|M_f\|: 50; \|M_c\|: 2000. ... We set the size of seed points as 50 for S_f and 200 for S_c for this problem. |
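
For intuition, the function-sampling step quoted above (Algorithm 1) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: `residual_fn` is a hypothetical stand-in for the model's residual loss l_r(f, a; θ), and since the paper's Eq. (11) is not reproduced here, the sampling distribution q_{f^i} is approximated by normalizing each function's mean residual over the seed points.

```python
import numpy as np

def sample_functions(residual_fn, functions, seed_points, batch_size, rng=None):
    """Importance-sample a batch of functions by residual magnitude.

    residual_fn(f, a): stand-in for the PDE residual loss l_r(f, a; theta).
    The true q_{f^i} is given by the paper's Eq. (11); here we simply
    normalize each function's mean residual over the seed points.
    """
    rng = rng or np.random.default_rng()
    # Mean residual loss of each candidate function over the seed points S_f.
    scores = np.array([
        np.mean([residual_fn(f, a) for a in seed_points])
        for f in functions
    ])
    q = scores / scores.sum()  # sampling distribution over functions
    # Draw the batch M_f according to q (without replacement).
    idx = rng.choice(len(functions), size=batch_size, replace=False, p=q)
    return [functions[i] for i in idx], q
```

Algorithm 2 follows the same pattern one level down: within each sampled function, collocation points are drawn from their own seed-point residual distribution.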