EvoMesh: Adaptive Physical Simulation with Hierarchical Graph Evolutions
Authors: Huayu Deng, Xiangming Zhu, Yunbo Wang, Xiaokang Yang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on five benchmark physical simulation datasets show that EvoMesh outperforms recent fixed-hierarchy message passing networks by large margins. |
| Researcher Affiliation | Academia | MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University. Correspondence to: Yunbo Wang <EMAIL>. |
| Pseudocode | No | The paper describes the methodology using prose, equations, and diagrams (Figure 1 for architecture overview) but does not present any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The project page is available at https://hbell99.github.io/evo-mesh/. |
| Open Datasets | Yes | We employ four established datasets from MGN (Pfaff et al., 2021): Cylinder Flow, Airfoil, Flag, and Deforming Plate. [...] We also consider a more challenging dataset, Folding Paper, where varying forces at the four corners deform paper with time-varying Lagrangian mesh graphs, generated using the ARCSim solver (Narain et al., 2012; Wu et al., 2023). |
| Dataset Splits | Yes | The Cylinder Flow, Airfoil, and Flag datasets are each split into 1,000 training sequences, 100 validation sequences, and 100 testing sequences. The Deforming Plate dataset is split into 500 training sequences, 100 validation sequences, and 100 testing sequences. This dataset [Folding Paper] is divided into 500 training sequences, 100 validation sequences, and 100 testing sequences. |
| Hardware Specification | Yes | All experiments are conducted using 4 Nvidia RTX 3090. |
| Software Dependencies | No | EvoMesh is trained with the Adam optimizer... We mainly build EvoMesh based on the released code of BSMS-GNN (Cao et al., 2023). |
| Experiment Setup | Yes | We set K = 2 for edge enhancement... In the Gumbel-Softmax for differentiable node selection, temperature annealing decreases the temperature from 5 to 0.1 using a decay factor of γ = 0.999... EvoMesh is trained with the Adam optimizer, using an exponential learning rate decay from 10⁻⁴ to 10⁻⁶ with a decay rate of γ = 0.79 over the first 500K steps. The batch size is set to 32. |
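The hyperparameters quoted in the Experiment Setup row can be sketched as simple schedule functions. This is a minimal sketch, not the authors' code: the paper states only the endpoints and decay factors, so the per-step geometric form, the floor at the lower endpoint, and the `decay_every` interval for the learning rate are all assumptions.

```python
def gumbel_temperature(step: int, t0: float = 5.0, t_min: float = 0.1,
                       gamma: float = 0.999) -> float:
    """Gumbel-Softmax temperature annealing: start at t0, multiply by gamma
    each step, and floor at t_min (per-step application is an assumption)."""
    return max(t_min, t0 * gamma ** step)


def learning_rate(step: int, lr0: float = 1e-4, lr_min: float = 1e-6,
                  gamma: float = 0.79, decay_every: int = 25_000) -> float:
    """Exponential LR decay from 1e-4 toward 1e-6 with factor 0.79.
    The decay interval is a hypothetical choice: ~20 decays of 0.79 span
    two orders of magnitude, which fits within the stated 500K steps."""
    return max(lr_min, lr0 * gamma ** (step // decay_every))
```

With these defaults, the temperature starts at 5.0 and settles at the 0.1 floor, and the learning rate starts at 1e-4 and bottoms out at 1e-6 well before training ends.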