Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

EAGLE: Large-scale Learning of Turbulent Fluid Dynamics with Mesh Transformers

Authors: Steeven Janny, Aurélien Bénéteau, Madiha Nadri, Julie Digne, Nicolas Thome, Christian Wolf

ICLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5 EXPERIMENTS We compare our method against three competing methods for physical reasoning: Mesh Graph Net (Pfaff et al., 2021) (MGN) is a Graph Neural Network based model that relies on multiple chained message passing layers. ... We evaluate all models reporting the sum of the root mean squared error (N-RMSE) on both pressure and velocity fields...
Researcher Affiliation | Collaboration | Steeven Janny, LIRIS, INSA Lyon, France; Aurélien Bénéteau, Sup Aero, France; Madiha Nadri, LAGEPP, Univ. Lyon 1, France; Julie Digne, LIRIS, CNRS, France; Nicolas Thome, Sorbonne University, CNRS, ISIR, Paris, France; Christian Wolf, Naver Labs Europe, France
Pseudocode | No | The paper describes its model architecture and components in detail through text and mathematical equations (e.g., equations 1-6), but it does not include a dedicated pseudocode block or algorithm listing.
Open Source Code | No | The dataset will be made publically available upon publication. (This refers to the dataset, not the model's source code.) An online tool for interactive visualization is provided: 'Attention maps can be explored interactively using the online tool at https://eagle-dataset.github.io.' However, there is no explicit statement or link to the source code of the methodology itself.
Open Datasets | Yes | We propose EAGLE, a large-scale dataset... The dataset will be made publically available upon publication. We evaluate the method on several datasets and achieve state-of-the-art performance on two public fluid mechanics datasets (Cylinder-Flow (Pfaff et al., 2021) and Scalar-Flow (Eckert et al., 2019)).
Dataset Splits | Yes | A proper train/valid/test splitting is provided ensuring that each geometry type is equally represented. The train split contains 948 simulations, while test and valid splits each contain 118 simulations.
Hardware Specification | Yes | GAT seems to struggle on our challenging dataset. The required increase in capacity was difficult to do for this resource-hungry model; we failed even on the 40GB A100 GPUs of a high-end Nvidia DGX.
Software Dependencies | No | The paper mentions software like 'Ansys Fluent' for simulations and the 'Adam optimizer' for training, but it does not provide specific version numbers for any software dependencies (e.g., PyTorch, CUDA, or Ansys Fluent versions).
Experiment Setup | Yes | We kept the same training setup for all datasets and trained our model for 10,000 steps with the Adam optimizer and a learning rate of 10^-4 to minimize equation 6 with α = 10^-1 and H = 8.
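The quoted setup minimizes the paper's equation 6 with weight α = 10^-1, and the metric quote reports errors on both pressure and velocity fields. The paper's equation 6 is not reproduced here, so the following is only a minimal sketch under the assumption that the objective combines a velocity-field error with an α-weighted pressure-field error; the function names and the exact decomposition are hypothetical, not the authors' implementation.

```python
import numpy as np

def mse(pred, target):
    # Mean squared error over all mesh nodes and field components.
    return float(np.mean((pred - target) ** 2))

def combined_loss(pred_vel, true_vel, pred_press, true_press, alpha=0.1):
    # Hypothetical stand-in for the paper's equation 6: velocity-field
    # error plus an alpha-weighted pressure-field error. alpha = 1e-1
    # matches the quoted setup; the true equation 6 may differ.
    return mse(pred_vel, true_vel) + alpha * mse(pred_press, true_press)

# Toy fields on a 100-node mesh: 2-D velocity and scalar pressure per node.
rng = np.random.default_rng(0)
v_true = rng.normal(size=(100, 2))
p_true = rng.normal(size=(100, 1))

# A uniform prediction offset of 0.1 gives an MSE of 0.01 on each field,
# so the combined loss is 0.01 + 0.1 * 0.01 = 0.011.
loss = combined_loss(v_true + 0.1, v_true, p_true + 0.1, p_true)
```

In a training loop, this scalar would be minimized for 10,000 steps with Adam at a learning rate of 10^-4, as described in the quoted setup.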