A Neural Material Point Method for Particle-based Emulation

Authors: Omer Rochman-Sharabi, Sacha Lewin, Gilles Louppe

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct a series of experiments to demonstrate the accuracy, speed, and generalization capabilities of Neural MPM. Specifically, we examine its robustness to hyperparameter and architectural choices through an ablation study (4.1). We compare Neural MPM to GNS and DMCF in terms of accuracy, training time, convergence, and inference speed (4.2). We also evaluate the generalization capabilities of Neural MPM (4.3) and illustrate how its differentiability can be leveraged to solve an inverse design problem (4.4)."
Researcher Affiliation | Academia | "Omer Rochman-Sharabi EMAIL University of Liège; Sacha Lewin EMAIL University of Liège; Gilles Louppe EMAIL University of Liège"
Pseudocode | Yes | "Algorithm 1 Neural MPM. Require: P₀, V₀, grid g, t, neural backbone h, functions p2g and g2p"
Open Source Code | Yes | "A project page is available at https://neuralmpm.isach.be. ... Our implementation, training scripts, experiment configurations, and instructions for reproducing results are publicly available at [URL]. ... The code, together with additional videos, is available at the project's website [URL]."
Open Datasets | Yes | "The first four datasets... are taken from Sanchez-Gonzalez et al. (2020) and were simulated using the Taichi-MPM simulator (Hu et al., 2018b). ... The fifth dataset, Dam Break 2D, was generated using SPH... The last dataset, Variable Gravity, was also generated using Taichi-MPM."
Dataset Splits | Yes | "The first two datasets contain random ramp obstacles to challenge the model's generalization capacity. The fourth dataset, Multi Material, mixes the three materials together in the same simulations. These four datasets are taken from Sanchez-Gonzalez et al. (2020) and were simulated using the Taichi-MPM simulator (Hu et al., 2018b). They each contain 1000 trajectories for training and 30 (Goop) or 100 (Water Ramps, Sand Ramps, Multi Material) for validation and testing. The fifth dataset, Dam Break 2D, was generated using SPH and contains 50 trajectories for learning, and 25 for validation and testing. The last dataset, Variable Gravity, was also generated using Taichi-MPM. It consists of simulations with variable gravity of a water-like material, and contains 1000 trajectories for training and 100 for validation and testing."
Hardware Specification | Yes | "We run all our experiments using the same hardware: 4 CPUs, 24GB of RAM, and an NVIDIA RTX A5000 GPU with 24GB of VRAM. For reproducing the results of DMCF, we kept the A5000 GPU but it required up to 96GB of RAM for training."
Software Dependencies | No | "We implement Neural MPM using PyTorch (Paszke et al., 2019), and use PyTorch Geometric (Fey & Lenssen, 2019) for implementing efficient particle-to-grid functions, more specifically from the Scatter and Cluster modules. ... The voxelization procedure used is implemented using CUDA by PyTorch Cluster (Fey & Lenssen, 2019), p2g is implemented by us based on the voxelized representation, and g2p uses PyTorch's bilinear interpolation (grid_sample) (Paszke et al., 2019) based on the voxelized representation of the data."
Experiment Setup | Yes | "We use Adam (Kingma & Ba, 2014) with the following learning rate schedule: a linear warm-up over 100 steps from 10⁻⁵ to 10⁻³, 900 steps at 10⁻³, then a cosine annealing (Loshchilov & Hutter, 2017) for 100,000 iterations. We use a batch size of 128, K = 4 autoregressive steps per iteration, bundle m = 8 timesteps per model call (resulting in 24 predicted states), and a grid size of 64×64. For most of our experiments, we use a U-Net (Ronneberger et al., 2015) with three downsampling blocks with a factor of 2, 64 hidden channels, a kernel size of 3, and MLPs with three hidden layers of size 64 for pixel-wise encoding and decoding into a latent space."
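The learning-rate schedule quoted above (linear warm-up over 100 steps from 10⁻⁵ to 10⁻³, 900 steps at 10⁻³, then cosine annealing) can be written as a small step-to-rate function. This is an illustrative sketch, not the authors' code: the final annealed rate (taken as 0 here) and the reading of "100,000 iterations" as the total training length are assumptions not stated in the report.

```python
import math

def lr_at(step, warmup=100, hold=900, total=100_000,
          lr_min=1e-5, lr_max=1e-3, lr_final=0.0):
    """Learning rate at a given iteration: linear warm-up from lr_min to
    lr_max over `warmup` steps, `hold` steps at lr_max, then cosine
    annealing down to lr_final over the remaining iterations."""
    if step < warmup:
        # Linear warm-up.
        return lr_min + (lr_max - lr_min) * step / warmup
    if step < warmup + hold:
        # Constant plateau at the peak rate.
        return lr_max
    # Cosine annealing from lr_max to lr_final over the remaining steps.
    t = (step - warmup - hold) / max(total - warmup - hold, 1)
    return lr_final + 0.5 * (lr_max - lr_final) * (1 + math.cos(math.pi * min(t, 1.0)))
```

For example, `lr_at(0)` gives 10⁻⁵, `lr_at(500)` gives 10⁻³, and the rate decays smoothly to `lr_final` by step 100,000.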
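The p2g and g2p functions required by Algorithm 1 (implemented in the paper with PyTorch Scatter/Cluster and `grid_sample`, per the Software Dependencies row) can be sketched in NumPy. The cell-averaged scatter in `p2g` and the cell-center convention in `g2p` are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def p2g(positions, velocities, grid_size):
    """Particle-to-grid: average particle velocities into the cells of a
    regular grid. positions are (N, 2) in [0, 1)^2, velocities are (N, 2)."""
    # Map continuous positions to integer cell indices.
    cells = np.clip((positions * grid_size).astype(int), 0, grid_size - 1)
    flat = cells[:, 1] * grid_size + cells[:, 0]
    grid = np.zeros((grid_size * grid_size, 2))
    counts = np.zeros(grid_size * grid_size)
    # Scatter-add velocities and particle counts, then average per cell.
    np.add.at(grid, flat, velocities)
    np.add.at(counts, flat, 1.0)
    grid /= np.maximum(counts, 1.0)[:, None]
    return grid.reshape(grid_size, grid_size, 2)

def g2p(grid, positions):
    """Grid-to-particle: bilinear interpolation of grid values at particle
    positions (the role played by grid_sample in the paper)."""
    gs = grid.shape[0]
    # Continuous cell-space coordinates, with values stored at cell centers.
    x = positions * gs - 0.5
    i0 = np.clip(np.floor(x).astype(int), 0, gs - 2)
    f = x - i0
    x0, y0 = i0[:, 0], i0[:, 1]
    fx, fy = f[:, 0:1], f[:, 1:2]
    v00, v10 = grid[y0, x0], grid[y0, x0 + 1]
    v01, v11 = grid[y0 + 1, x0], grid[y0 + 1, x0 + 1]
    return ((1 - fx) * (1 - fy) * v00 + fx * (1 - fy) * v10
            + (1 - fx) * fy * v01 + fx * fy * v11)
```

In the Neural MPM loop of Algorithm 1, these bracket the network call: particles are voxelized with `p2g`, the neural backbone h updates the grid, and `g2p` interpolates the result back to the particles.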