Score-based free-form architectures for high-dimensional Fokker-Planck equations
Authors: Feng Liu, Faguo Wu, Xiao Zhang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness on various high-dimensional steady-state Fokker-Planck (SFP) equations, achieving superior accuracy and over a 20× speedup compared to state-of-the-art methods. Experimental results highlight the potential as a universal fast solver for handling more than 20-dimensional SFP equations, with great gains in efficiency, accuracy, memory and computational resource usage. Our PDE examples span various 4-20 dimensional steady-state solutions, including ring-shape density, arbitrary potential function, and Gaussian mixture distribution, with complicated interactions among spatial coordinates. Table 2: Experimental results of TFFN and FPNN on 4-6 dimensional SFP equations. |
| Researcher Affiliation | Academia | Feng Liu (1,3,4,5), Faguo Wu (1,4,5,6), Xiao Zhang (2,4,5,6,7). 1 School of Artificial Intelligence, Beihang University; 2 School of Mathematical Sciences, Beihang University; 3 National Superior College for Engineers, Beihang University; 4 Key Laboratory of Mathematics, Informatics and Behavioral Semantics, MoE; 5 Zhongguancun Laboratory; 6 Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing; 7 Hangzhou International Innovation Institute, Beihang University. Corresponding author. Email: EMAIL |
| Pseudocode | Yes | Algorithm 1 SRK method for steady-state data generation |
| Open Source Code | Yes | Our code is available at https://github.com/niuffs/FPNN. |
| Open Datasets | No | We provide the code for generating test datasets and fix the random seed for reproducibility, enabling future research to perform comparisons with FPNN under consistent evaluation metrics. |
| Dataset Splits | Yes | We generated a training dataset Dtrain with 20k samples using the 1.5-order SRK method... For the test dataset generation, we sample 10k initial points uniformly within Ω. ... We generate 20k training points and determine domain Ω = [−1.2, 1.2]^6. The test dataset is created by applying gradient ascent to 10k initial points... |
| Hardware Specification | Yes | The models are implemented in the PyTorch framework and trained on an NVIDIA Quadro RTX 8000 GPU with 48GB memory. |
| Software Dependencies | No | The models are implemented in the PyTorch framework and trained on an NVIDIA Quadro RTX 8000 GPU with 48GB memory. We point out that appropriate simplifications (e.g., completing the square and variable substitution) and the choice of integration order significantly impact the accuracy of results. In Table 6, we list the network settings used for plotting experimental results, where TNN and MLP represent the architectures in FPNN framework using score PDE loss. We use the SymPy library to compute the partition function in Eq.(47). SymPy is a powerful and versatile tool for symbolic mathematics and provides computer algebra system (CAS) capabilities directly in Python. |
| Experiment Setup | Yes | For all SFP equations in our experiments, FPNNs are trained under consistent settings: we use Adam optimizer with a learning rate of 0.01 and a batch size of 2k, resulting in 10 iterations per epoch. The network structure and prediction performance are detailed in Table 4. TFFN consists of four sub-networks of 3 hidden layers with 64 hidden feature size, updated for 20k steps using the Adam optimizer with a learning rate of 0.01, as shown in Table 6. Training data are uniformly sampled from Ω, with 2k points resampled per iteration. |
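The "Pseudocode" row notes that steady-state training data are generated by simulating the SDE with a 1.5-order SRK method (Algorithm 1). A minimal sketch of the same idea, substituting the much simpler Euler-Maruyama scheme for the paper's SRK integrator, and using a hypothetical Ornstein-Uhlenbeck drift (not from the paper) whose steady state is Gaussian:

```python
import numpy as np

def euler_maruyama_steady_state(drift, n_samples, dim, dt=0.01,
                                n_steps=2000, sigma=1.0, seed=0):
    """Simulate dX = drift(X) dt + sigma dW and return the final states,
    which approximate samples from the steady-state density.

    NOTE: stand-in for the paper's 1.5-order SRK method; Euler-Maruyama
    is only first-order but illustrates the data-generation idea."""
    rng = np.random.default_rng(seed)
    # Illustrative initial points in a box; the paper samples within Omega.
    x = rng.uniform(-1.2, 1.2, size=(n_samples, dim))
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + drift(x) * dt + sigma * np.sqrt(dt) * noise
    return x

# Hypothetical OU drift -x: stationary density is N(0, 0.5 I).
samples = euler_maruyama_steady_state(lambda x: -x, n_samples=20000, dim=4)
```

The long simulation horizon (n_steps * dt = 20 time units here) matters: the final states only approximate the steady-state law once the chain has mixed.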
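The "Dataset Splits" row says test points are produced by applying gradient ascent to 10k uniformly sampled initial points, which concentrates evaluation in high-density regions. A minimal sketch under the assumption that gradient ascent is run on the log-density (i.e., following the score); the standard-Gaussian target and all step-size settings below are illustrative, not from the paper:

```python
import numpy as np

def gradient_ascent_points(grad_log_p, n_points, dim, lr=0.05,
                           n_steps=200, bound=1.2, seed=0):
    """Push uniformly sampled initial points toward high-density regions
    by ascending the log-density gradient (the score)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, size=(n_points, dim))
    for _ in range(n_steps):
        x = x + lr * grad_log_p(x)
        x = np.clip(x, -bound, bound)  # keep points inside Omega
    return x

# Hypothetical standard-Gaussian target: grad log p(x) = -x,
# so all points contract toward the mode at the origin.
pts = gradient_ascent_points(lambda x: -x, n_points=10000, dim=6)
```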
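The "Software Dependencies" row mentions using SymPy to compute the partition function of a reference density symbolically. A minimal sketch of that kind of computation for a hypothetical 1D potential V(x) = x^2/2 (the paper's Eq.(47) involves its own potentials, not this one); the closed form here is Z = sqrt(2*pi):

```python
import sympy as sp

# Hypothetical 1D Boltzmann-type density p(x) ∝ exp(-V(x)) with V(x) = x^2/2.
x = sp.symbols('x', real=True)
V = x**2 / 2

# Partition function Z = ∫ exp(-V(x)) dx over the real line,
# computed symbolically (exact, no quadrature error).
Z = sp.integrate(sp.exp(-V), (x, -sp.oo, sp.oo))  # equals sqrt(2*pi)
```

As the paper notes, simplifications such as completing the square and variable substitution can make these symbolic integrals tractable where a naive form would not be.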