Neural Fluid Simulation on Geometric Surfaces
Authors: Haoxiang Wang, Tao Yu, Hui Qiao, Qionghai Dai
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct numerical studies for our proposed framework. Our primary emphasis lies in verifying the efficacy of our framework in fluid dynamics across various surface representations, exploring conditioning characteristics, and demonstrating practical applications such as the Helmholtz decomposition using real-world data. |
| Researcher Affiliation | Academia | 1Department of Automation, Tsinghua University, 2BNRist, Tsinghua University, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | The computational process is shown in the pseudocode Algorithm 1 in Appendix D. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It only mentions the use of a third-party library, JAX, for implementation. |
| Open Datasets | Yes | We take the EMNIST dataset (Cohen et al., 2017) as the image input and generate the divergence-free velocity fields with the vorticity imitating the silhouettes of alphabets. [...] Finally, we apply our method to the real-world atmosphere dataset (Raoult et al., 2017). |
| Dataset Splits | No | The paper uses datasets like EMNIST and a real-world atmosphere dataset but does not explicitly provide specific details about how these datasets were split into training, validation, or test sets for their experiments. |
| Hardware Specification | Yes | Our experiments are all implemented with the JAX library (Bradbury et al., 2018) on an NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions implementation with the 'JAX library (Bradbury et al., 2018)' but does not provide specific version numbers for JAX or any other key software dependencies. |
| Experiment Setup | Yes | Sphere Jet: We adopt a 4-layer MLP (for sharper simulation results compared with SIREN) with 128 units for our implementation. The learning rate is set with exponential decay from 1e-5 to 1e-7 over 60,000 steps and batch size 1000 for each time step. The time step is chosen as 5e-2. |
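The quoted setup above pins down only the network shape (4 layers, 128 units) and the learning-rate schedule (exponential decay, 1e-5 to 1e-7 over 60,000 steps). A minimal JAX sketch of that configuration is below; the input/output dimensions, weight initialization, and `tanh` activation are assumptions not stated in the excerpt.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, in_dim=3, hidden=128, out_dim=3, n_layers=4):
    """4-layer MLP with 128 hidden units, as reported.
    in_dim/out_dim and the scaled-normal init are assumptions."""
    sizes = [in_dim] + [hidden] * (n_layers - 1) + [out_dim]
    params = []
    for i in range(len(sizes) - 1):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (sizes[i], sizes[i + 1])) / jnp.sqrt(sizes[i])
        params.append((w, jnp.zeros(sizes[i + 1])))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)  # activation choice is an assumption
    w, b = params[-1]
    return x @ w + b

def lr_schedule(step, lr0=1e-5, lr1=1e-7, total=60_000):
    """Exponential decay from lr0 to lr1 over `total` steps, per the paper."""
    return lr0 * (lr1 / lr0) ** (step / total)

# Per-time-step training would draw batches of 1000 surface samples and
# advance the simulation with time step dt = 5e-2, as reported.
```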