Designing Mechanical Meta-Materials by Learning Equivariant Flows

Authors: Mehran Mirramezani, Anne Meeussen, Katia Bertoldi, Peter Orbanz, Ryan P Adams

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate simulated mechanical behaviors of these new designs against fabricated real-world prototypes. We find that designs with higher-order symmetries can exhibit a wider range of behaviors.
Researcher Affiliation | Academia | Mehran Mirramezani, Department of Computer Science, Princeton University (EMAIL); Anne S. Meeussen & Katia Bertoldi, School of Engineering and Applied Sciences, Harvard University (EMAIL); Peter Orbanz, Gatsby Computational Neuroscience Unit, University College London (EMAIL); Ryan P. Adams, Department of Computer Science, Princeton University (EMAIL)
Pseudocode | No | The paper describes the methodology using mathematical formulations and descriptive text, but it does not contain a clearly labeled pseudocode block or algorithm.
Open Source Code | No | The paper mentions third-party open-source tools such as pygmsh and JAX, but it contains no explicit statement that the authors release their own source code for the described methodology, nor does it provide a repository link.
Open Datasets | No | The paper does not explicitly mention the use of any publicly available or open datasets for training or evaluation. The research focuses on designing and simulating mechanical meta-materials using physical models and then validating some designs with real-world prototypes.
Dataset Splits | No | The paper does not use any external datasets; therefore, it makes no mention of training, validation, or test splits.
Hardware Specification | Yes | Using a single NVIDIA GeForce RTX 2080 Ti GPU, each optimization step (which requires forward simulation and gradient computation) requires approximately 60 seconds.
Software Dependencies | No | The paper mentions software such as JAX and pygmsh, but does not provide specific version numbers for these or any other key software dependencies.
Experiment Setup | Yes | The function hθ in Section 3 is represented by a fully-connected neural network with two hidden layers of size 10, with tanh nonlinearity. To optimize these parameters, we use ADAM with a learning rate of 0.001. Each optimization is performed several times with different initializations of the neural network parameters. [...] neural network parameters are optimized until a loss value less than 0.0001 is achieved, whereas for Poisson's ratio designs a loss value less than 0.01 was used for stopping simulations.
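The quoted setup (a fully-connected network with two hidden layers of size 10, tanh nonlinearity, Adam at learning rate 0.001, stopping once the loss drops below 0.0001) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' code: the paper uses JAX with autodiff, and the 1-D sine regression target, layer input/output widths, initialization scale, and 5000-step cap below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: input -> 10 -> 10 -> output, per the quoted setup
# (the 1-D input/output here is an illustrative choice).
sizes = [1, 10, 10, 1]
params = [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass with tanh hidden layers, caching activations for backprop."""
    acts = [x]
    for W, b in params[:-1]:
        acts.append(np.tanh(acts[-1] @ W + b))
    W, b = params[-1]
    acts.append(acts[-1] @ W + b)          # linear output layer
    return acts

def backward(params, acts, y):
    """Gradients of the mean-squared-error loss via manual backprop."""
    n = y.shape[0]
    delta = 2.0 * (acts[-1] - y) / n       # dL/d(output)
    grads = []
    for i in range(len(params) - 1, -1, -1):
        W, _ = params[i]
        grads.append((acts[i].T @ delta, delta.sum(axis=0)))
        if i > 0:
            delta = (delta @ W.T) * (1.0 - acts[i] ** 2)  # tanh derivative
    return grads[::-1]

# Adam with learning rate 0.001, as quoted.
lr, b1, b2, eps = 1e-3, 0.9, 0.999, 1e-8
m = [(np.zeros_like(W), np.zeros_like(b)) for W, b in params]
v = [(np.zeros_like(W), np.zeros_like(b)) for W, b in params]

x = rng.uniform(-1.0, 1.0, (64, 1))
y = np.sin(3.0 * x)                        # toy target, not from the paper

for t in range(1, 5001):
    acts = forward(params, x)
    grads = backward(params, acts, y)
    new_params = []
    for i, ((W, b), (gW, gb)) in enumerate(zip(params, grads)):
        mW, mb = m[i]; vW, vb = v[i]
        mW = b1 * mW + (1 - b1) * gW;   mb = b1 * mb + (1 - b1) * gb
        vW = b2 * vW + (1 - b2) * gW**2; vb = b2 * vb + (1 - b2) * gb**2
        m[i], v[i] = (mW, mb), (vW, vb)
        mWh, mbh = mW / (1 - b1**t), mb / (1 - b1**t)   # bias correction
        vWh, vbh = vW / (1 - b2**t), vb / (1 - b2**t)
        new_params.append((W - lr * mWh / (np.sqrt(vWh) + eps),
                           b - lr * mbh / (np.sqrt(vbh) + eps)))
    params = new_params
    final_loss = np.mean((acts[-1] - y) ** 2)
    if final_loss < 1e-4:                  # stopping criterion quoted above
        break
```

The 0.01 threshold the paper mentions for Poisson's-ratio designs would simply replace the `1e-4` stopping tolerance; in the actual work the loss is computed from the differentiable mechanics simulation rather than a toy regression target.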